Test Report: KVM_Linux_crio 20363

7e7f32fac0d8189b7e029c65d7fa3a0906f68836:2025-02-05:38218

Failed tests (11/321)

TestAddons/parallel/Ingress (154.61s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-395572 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-395572 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-395572 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [597824e3-3c20-407e-b032-c884d6df1ddd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [597824e3-3c20-407e-b032-c884d6df1ddd] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004254229s
I0205 02:07:00.508662   19989 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-395572 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.689313057s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-395572 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.234
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-395572 -n addons-395572
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-395572 logs -n 25: (1.185559001s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-323625                                                                     | download-only-323625 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:03 UTC |
	| delete  | -p download-only-374995                                                                     | download-only-374995 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:03 UTC |
	| delete  | -p download-only-323625                                                                     | download-only-323625 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-397529 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |                     |
	|         | binary-mirror-397529                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36929                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-397529                                                                     | binary-mirror-397529 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:03 UTC |
	| addons  | disable dashboard -p                                                                        | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |                     |
	|         | addons-395572                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |                     |
	|         | addons-395572                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-395572 --wait=true                                                                | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:06 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-395572 addons disable                                                                | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-395572 addons disable                                                                | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | -p addons-395572                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-395572 addons                                                                        | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-395572 addons disable                                                                | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-395572 addons disable                                                                | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-395572 ip                                                                            | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	| addons  | addons-395572 addons disable                                                                | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-395572 addons                                                                        | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-395572 addons                                                                        | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-395572 addons                                                                        | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-395572 ssh cat                                                                       | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | /opt/local-path-provisioner/pvc-fecc77c1-5a4e-42cb-af0d-0ce82b98a634_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-395572 addons disable                                                                | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-395572 ssh curl -s                                                                   | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:07 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-395572 addons                                                                        | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:07 UTC | 05 Feb 25 02:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-395572 addons                                                                        | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:07 UTC | 05 Feb 25 02:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-395572 ip                                                                            | addons-395572        | jenkins | v1.35.0 | 05 Feb 25 02:09 UTC | 05 Feb 25 02:09 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 02:03:49
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 02:03:49.822825   20618 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:03:49.822939   20618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:49.822948   20618 out.go:358] Setting ErrFile to fd 2...
	I0205 02:03:49.822952   20618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:49.823133   20618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 02:03:49.823719   20618 out.go:352] Setting JSON to false
	I0205 02:03:49.824524   20618 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2781,"bootTime":1738718249,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:03:49.824637   20618 start.go:139] virtualization: kvm guest
	I0205 02:03:49.826667   20618 out.go:177] * [addons-395572] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 02:03:49.827979   20618 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 02:03:49.827988   20618 notify.go:220] Checking for updates...
	I0205 02:03:49.830328   20618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:03:49.831540   20618 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 02:03:49.832702   20618 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 02:03:49.833916   20618 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 02:03:49.835146   20618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 02:03:49.836357   20618 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:03:49.869586   20618 out.go:177] * Using the kvm2 driver based on user configuration
	I0205 02:03:49.870673   20618 start.go:297] selected driver: kvm2
	I0205 02:03:49.870686   20618 start.go:901] validating driver "kvm2" against <nil>
	I0205 02:03:49.870699   20618 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 02:03:49.871377   20618 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 02:03:49.871503   20618 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 02:03:49.886834   20618 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 02:03:49.886879   20618 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0205 02:03:49.887103   20618 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 02:03:49.887130   20618 cni.go:84] Creating CNI manager for ""
	I0205 02:03:49.887168   20618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 02:03:49.887177   20618 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0205 02:03:49.887217   20618 start.go:340] cluster config:
	{Name:addons-395572 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-395572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:03:49.887316   20618 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 02:03:49.889009   20618 out.go:177] * Starting "addons-395572" primary control-plane node in "addons-395572" cluster
	I0205 02:03:49.890340   20618 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:03:49.890375   20618 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 02:03:49.890384   20618 cache.go:56] Caching tarball of preloaded images
	I0205 02:03:49.890469   20618 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 02:03:49.890483   20618 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 02:03:49.890772   20618 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/config.json ...
	I0205 02:03:49.890791   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/config.json: {Name:mk72b48e33089af7676d8d0f2d4f09db6e3de184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:03:49.890922   20618 start.go:360] acquireMachinesLock for addons-395572: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 02:03:49.890967   20618 start.go:364] duration metric: took 31.666µs to acquireMachinesLock for "addons-395572"
	I0205 02:03:49.890985   20618 start.go:93] Provisioning new machine with config: &{Name:addons-395572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-395572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 02:03:49.891036   20618 start.go:125] createHost starting for "" (driver="kvm2")
	I0205 02:03:49.892597   20618 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0205 02:03:49.892739   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:03:49.892798   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:03:49.907105   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0205 02:03:49.907582   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:03:49.908098   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:03:49.908119   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:03:49.908509   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:03:49.908698   20618 main.go:141] libmachine: (addons-395572) Calling .GetMachineName
	I0205 02:03:49.908853   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:03:49.908982   20618 start.go:159] libmachine.API.Create for "addons-395572" (driver="kvm2")
	I0205 02:03:49.909013   20618 client.go:168] LocalClient.Create starting
	I0205 02:03:49.909052   20618 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem
	I0205 02:03:50.072471   20618 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem
	I0205 02:03:50.375764   20618 main.go:141] libmachine: Running pre-create checks...
	I0205 02:03:50.375790   20618 main.go:141] libmachine: (addons-395572) Calling .PreCreateCheck
	I0205 02:03:50.376267   20618 main.go:141] libmachine: (addons-395572) Calling .GetConfigRaw
	I0205 02:03:50.376670   20618 main.go:141] libmachine: Creating machine...
	I0205 02:03:50.376683   20618 main.go:141] libmachine: (addons-395572) Calling .Create
	I0205 02:03:50.376797   20618 main.go:141] libmachine: (addons-395572) creating KVM machine...
	I0205 02:03:50.376814   20618 main.go:141] libmachine: (addons-395572) creating network...
	I0205 02:03:50.377991   20618 main.go:141] libmachine: (addons-395572) DBG | found existing default KVM network
	I0205 02:03:50.378690   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:50.378537   20640 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I0205 02:03:50.378707   20618 main.go:141] libmachine: (addons-395572) DBG | created network xml: 
	I0205 02:03:50.378719   20618 main.go:141] libmachine: (addons-395572) DBG | <network>
	I0205 02:03:50.378725   20618 main.go:141] libmachine: (addons-395572) DBG |   <name>mk-addons-395572</name>
	I0205 02:03:50.378735   20618 main.go:141] libmachine: (addons-395572) DBG |   <dns enable='no'/>
	I0205 02:03:50.378746   20618 main.go:141] libmachine: (addons-395572) DBG |   
	I0205 02:03:50.378757   20618 main.go:141] libmachine: (addons-395572) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0205 02:03:50.378768   20618 main.go:141] libmachine: (addons-395572) DBG |     <dhcp>
	I0205 02:03:50.378778   20618 main.go:141] libmachine: (addons-395572) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0205 02:03:50.378792   20618 main.go:141] libmachine: (addons-395572) DBG |     </dhcp>
	I0205 02:03:50.378804   20618 main.go:141] libmachine: (addons-395572) DBG |   </ip>
	I0205 02:03:50.378814   20618 main.go:141] libmachine: (addons-395572) DBG |   
	I0205 02:03:50.378858   20618 main.go:141] libmachine: (addons-395572) DBG | </network>
	I0205 02:03:50.378879   20618 main.go:141] libmachine: (addons-395572) DBG | 
	I0205 02:03:50.384386   20618 main.go:141] libmachine: (addons-395572) DBG | trying to create private KVM network mk-addons-395572 192.168.39.0/24...
	I0205 02:03:50.445935   20618 main.go:141] libmachine: (addons-395572) DBG | private KVM network mk-addons-395572 192.168.39.0/24 created
	I0205 02:03:50.445973   20618 main.go:141] libmachine: (addons-395572) setting up store path in /home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572 ...
	I0205 02:03:50.445997   20618 main.go:141] libmachine: (addons-395572) building disk image from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0205 02:03:50.446046   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:50.445976   20640 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 02:03:50.446192   20618 main.go:141] libmachine: (addons-395572) Downloading /home/jenkins/minikube-integration/20363-12788/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0205 02:03:50.709957   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:50.709787   20640 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa...
	I0205 02:03:50.998497   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:50.998353   20640 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/addons-395572.rawdisk...
	I0205 02:03:50.998531   20618 main.go:141] libmachine: (addons-395572) DBG | Writing magic tar header
	I0205 02:03:50.998544   20618 main.go:141] libmachine: (addons-395572) DBG | Writing SSH key tar header
	I0205 02:03:50.998555   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:50.998464   20640 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572 ...
	I0205 02:03:50.998569   20618 main.go:141] libmachine: (addons-395572) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572
	I0205 02:03:50.998624   20618 main.go:141] libmachine: (addons-395572) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572 (perms=drwx------)
	I0205 02:03:50.998642   20618 main.go:141] libmachine: (addons-395572) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines (perms=drwxr-xr-x)
	I0205 02:03:50.998649   20618 main.go:141] libmachine: (addons-395572) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines
	I0205 02:03:50.998663   20618 main.go:141] libmachine: (addons-395572) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 02:03:50.998673   20618 main.go:141] libmachine: (addons-395572) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube (perms=drwxr-xr-x)
	I0205 02:03:50.998682   20618 main.go:141] libmachine: (addons-395572) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788
	I0205 02:03:50.998697   20618 main.go:141] libmachine: (addons-395572) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0205 02:03:50.998707   20618 main.go:141] libmachine: (addons-395572) setting executable bit set on /home/jenkins/minikube-integration/20363-12788 (perms=drwxrwxr-x)
	I0205 02:03:50.998715   20618 main.go:141] libmachine: (addons-395572) DBG | checking permissions on dir: /home/jenkins
	I0205 02:03:50.998725   20618 main.go:141] libmachine: (addons-395572) DBG | checking permissions on dir: /home
	I0205 02:03:50.998733   20618 main.go:141] libmachine: (addons-395572) DBG | skipping /home - not owner
	I0205 02:03:50.998743   20618 main.go:141] libmachine: (addons-395572) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0205 02:03:50.998751   20618 main.go:141] libmachine: (addons-395572) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0205 02:03:50.998765   20618 main.go:141] libmachine: (addons-395572) creating domain...
	I0205 02:03:50.999684   20618 main.go:141] libmachine: (addons-395572) define libvirt domain using xml: 
	I0205 02:03:50.999726   20618 main.go:141] libmachine: (addons-395572) <domain type='kvm'>
	I0205 02:03:50.999740   20618 main.go:141] libmachine: (addons-395572)   <name>addons-395572</name>
	I0205 02:03:50.999747   20618 main.go:141] libmachine: (addons-395572)   <memory unit='MiB'>4000</memory>
	I0205 02:03:50.999755   20618 main.go:141] libmachine: (addons-395572)   <vcpu>2</vcpu>
	I0205 02:03:50.999761   20618 main.go:141] libmachine: (addons-395572)   <features>
	I0205 02:03:50.999788   20618 main.go:141] libmachine: (addons-395572)     <acpi/>
	I0205 02:03:50.999811   20618 main.go:141] libmachine: (addons-395572)     <apic/>
	I0205 02:03:50.999838   20618 main.go:141] libmachine: (addons-395572)     <pae/>
	I0205 02:03:50.999857   20618 main.go:141] libmachine: (addons-395572)     
	I0205 02:03:50.999867   20618 main.go:141] libmachine: (addons-395572)   </features>
	I0205 02:03:50.999875   20618 main.go:141] libmachine: (addons-395572)   <cpu mode='host-passthrough'>
	I0205 02:03:50.999890   20618 main.go:141] libmachine: (addons-395572)   
	I0205 02:03:50.999898   20618 main.go:141] libmachine: (addons-395572)   </cpu>
	I0205 02:03:50.999907   20618 main.go:141] libmachine: (addons-395572)   <os>
	I0205 02:03:50.999914   20618 main.go:141] libmachine: (addons-395572)     <type>hvm</type>
	I0205 02:03:50.999929   20618 main.go:141] libmachine: (addons-395572)     <boot dev='cdrom'/>
	I0205 02:03:50.999936   20618 main.go:141] libmachine: (addons-395572)     <boot dev='hd'/>
	I0205 02:03:50.999945   20618 main.go:141] libmachine: (addons-395572)     <bootmenu enable='no'/>
	I0205 02:03:50.999952   20618 main.go:141] libmachine: (addons-395572)   </os>
	I0205 02:03:50.999961   20618 main.go:141] libmachine: (addons-395572)   <devices>
	I0205 02:03:50.999970   20618 main.go:141] libmachine: (addons-395572)     <disk type='file' device='cdrom'>
	I0205 02:03:50.999987   20618 main.go:141] libmachine: (addons-395572)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/boot2docker.iso'/>
	I0205 02:03:51.000004   20618 main.go:141] libmachine: (addons-395572)       <target dev='hdc' bus='scsi'/>
	I0205 02:03:51.000016   20618 main.go:141] libmachine: (addons-395572)       <readonly/>
	I0205 02:03:51.000026   20618 main.go:141] libmachine: (addons-395572)     </disk>
	I0205 02:03:51.000039   20618 main.go:141] libmachine: (addons-395572)     <disk type='file' device='disk'>
	I0205 02:03:51.000051   20618 main.go:141] libmachine: (addons-395572)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0205 02:03:51.000065   20618 main.go:141] libmachine: (addons-395572)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/addons-395572.rawdisk'/>
	I0205 02:03:51.000080   20618 main.go:141] libmachine: (addons-395572)       <target dev='hda' bus='virtio'/>
	I0205 02:03:51.000092   20618 main.go:141] libmachine: (addons-395572)     </disk>
	I0205 02:03:51.000104   20618 main.go:141] libmachine: (addons-395572)     <interface type='network'>
	I0205 02:03:51.000114   20618 main.go:141] libmachine: (addons-395572)       <source network='mk-addons-395572'/>
	I0205 02:03:51.000125   20618 main.go:141] libmachine: (addons-395572)       <model type='virtio'/>
	I0205 02:03:51.000137   20618 main.go:141] libmachine: (addons-395572)     </interface>
	I0205 02:03:51.000150   20618 main.go:141] libmachine: (addons-395572)     <interface type='network'>
	I0205 02:03:51.000162   20618 main.go:141] libmachine: (addons-395572)       <source network='default'/>
	I0205 02:03:51.000176   20618 main.go:141] libmachine: (addons-395572)       <model type='virtio'/>
	I0205 02:03:51.000188   20618 main.go:141] libmachine: (addons-395572)     </interface>
	I0205 02:03:51.000196   20618 main.go:141] libmachine: (addons-395572)     <serial type='pty'>
	I0205 02:03:51.000208   20618 main.go:141] libmachine: (addons-395572)       <target port='0'/>
	I0205 02:03:51.000220   20618 main.go:141] libmachine: (addons-395572)     </serial>
	I0205 02:03:51.000240   20618 main.go:141] libmachine: (addons-395572)     <console type='pty'>
	I0205 02:03:51.000249   20618 main.go:141] libmachine: (addons-395572)       <target type='serial' port='0'/>
	I0205 02:03:51.000261   20618 main.go:141] libmachine: (addons-395572)     </console>
	I0205 02:03:51.000271   20618 main.go:141] libmachine: (addons-395572)     <rng model='virtio'>
	I0205 02:03:51.000282   20618 main.go:141] libmachine: (addons-395572)       <backend model='random'>/dev/random</backend>
	I0205 02:03:51.000296   20618 main.go:141] libmachine: (addons-395572)     </rng>
	I0205 02:03:51.000307   20618 main.go:141] libmachine: (addons-395572)     
	I0205 02:03:51.000317   20618 main.go:141] libmachine: (addons-395572)     
	I0205 02:03:51.000333   20618 main.go:141] libmachine: (addons-395572)   </devices>
	I0205 02:03:51.000341   20618 main.go:141] libmachine: (addons-395572) </domain>
	I0205 02:03:51.000354   20618 main.go:141] libmachine: (addons-395572) 
	I0205 02:03:51.006710   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:a3:6e:5f in network default
	I0205 02:03:51.007330   20618 main.go:141] libmachine: (addons-395572) starting domain...
	I0205 02:03:51.007352   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:03:51.007362   20618 main.go:141] libmachine: (addons-395572) ensuring networks are active...
	I0205 02:03:51.007951   20618 main.go:141] libmachine: (addons-395572) Ensuring network default is active
	I0205 02:03:51.008309   20618 main.go:141] libmachine: (addons-395572) Ensuring network mk-addons-395572 is active
	I0205 02:03:51.008921   20618 main.go:141] libmachine: (addons-395572) getting domain XML...
	I0205 02:03:51.009693   20618 main.go:141] libmachine: (addons-395572) creating domain...
	I0205 02:03:52.387713   20618 main.go:141] libmachine: (addons-395572) waiting for IP...
	I0205 02:03:52.388343   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:03:52.388650   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:03:52.388713   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:52.388659   20640 retry.go:31] will retry after 290.096512ms: waiting for domain to come up
	I0205 02:03:52.679931   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:03:52.680267   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:03:52.680293   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:52.680234   20640 retry.go:31] will retry after 361.746539ms: waiting for domain to come up
	I0205 02:03:53.043725   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:03:53.044089   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:03:53.044115   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:53.044060   20640 retry.go:31] will retry after 408.104131ms: waiting for domain to come up
	I0205 02:03:53.453746   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:03:53.454192   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:03:53.454223   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:53.454153   20640 retry.go:31] will retry after 466.56875ms: waiting for domain to come up
	I0205 02:03:53.921712   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:03:53.922153   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:03:53.922174   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:53.922114   20640 retry.go:31] will retry after 584.435568ms: waiting for domain to come up
	I0205 02:03:54.507826   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:03:54.508297   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:03:54.508338   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:54.508269   20640 retry.go:31] will retry after 630.185065ms: waiting for domain to come up
	I0205 02:03:55.139569   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:03:55.139926   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:03:55.139958   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:55.139916   20640 retry.go:31] will retry after 1.071114656s: waiting for domain to come up
	I0205 02:03:56.213025   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:03:56.213449   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:03:56.213478   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:56.213414   20640 retry.go:31] will retry after 1.164890148s: waiting for domain to come up
	I0205 02:03:57.380884   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:03:57.381397   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:03:57.381418   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:57.381334   20640 retry.go:31] will retry after 1.424158581s: waiting for domain to come up
	I0205 02:03:58.807909   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:03:58.808292   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:03:58.808318   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:03:58.808256   20640 retry.go:31] will retry after 1.914702659s: waiting for domain to come up
	I0205 02:04:00.724835   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:00.725399   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:04:00.725434   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:04:00.725310   20640 retry.go:31] will retry after 2.448787209s: waiting for domain to come up
	I0205 02:04:03.176779   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:03.177016   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:04:03.177038   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:04:03.176983   20640 retry.go:31] will retry after 3.226111618s: waiting for domain to come up
	I0205 02:04:06.404723   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:06.405067   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:04:06.405094   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:04:06.405033   20640 retry.go:31] will retry after 3.008765051s: waiting for domain to come up
	I0205 02:04:09.417179   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:09.417603   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find current IP address of domain addons-395572 in network mk-addons-395572
	I0205 02:04:09.417633   20618 main.go:141] libmachine: (addons-395572) DBG | I0205 02:04:09.417586   20640 retry.go:31] will retry after 5.442347003s: waiting for domain to come up
	I0205 02:04:14.865383   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:14.865843   20618 main.go:141] libmachine: (addons-395572) found domain IP: 192.168.39.234
	I0205 02:04:14.865863   20618 main.go:141] libmachine: (addons-395572) reserving static IP address...
	I0205 02:04:14.865872   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has current primary IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:14.866265   20618 main.go:141] libmachine: (addons-395572) DBG | unable to find host DHCP lease matching {name: "addons-395572", mac: "52:54:00:e9:87:50", ip: "192.168.39.234"} in network mk-addons-395572
	I0205 02:04:14.934738   20618 main.go:141] libmachine: (addons-395572) reserved static IP address 192.168.39.234 for domain addons-395572
	I0205 02:04:14.934771   20618 main.go:141] libmachine: (addons-395572) DBG | Getting to WaitForSSH function...
	I0205 02:04:14.934779   20618 main.go:141] libmachine: (addons-395572) waiting for SSH...
	I0205 02:04:14.937216   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:14.937603   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:14.937633   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:14.937815   20618 main.go:141] libmachine: (addons-395572) DBG | Using SSH client type: external
	I0205 02:04:14.937844   20618 main.go:141] libmachine: (addons-395572) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa (-rw-------)
	I0205 02:04:14.937872   20618 main.go:141] libmachine: (addons-395572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 02:04:14.937888   20618 main.go:141] libmachine: (addons-395572) DBG | About to run SSH command:
	I0205 02:04:14.937900   20618 main.go:141] libmachine: (addons-395572) DBG | exit 0
	I0205 02:04:15.065527   20618 main.go:141] libmachine: (addons-395572) DBG | SSH cmd err, output: <nil>: 
	I0205 02:04:15.065811   20618 main.go:141] libmachine: (addons-395572) KVM machine creation complete
	I0205 02:04:15.066054   20618 main.go:141] libmachine: (addons-395572) Calling .GetConfigRaw
	I0205 02:04:15.066652   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:15.066834   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:15.066968   20618 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0205 02:04:15.066983   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:15.068092   20618 main.go:141] libmachine: Detecting operating system of created instance...
	I0205 02:04:15.068105   20618 main.go:141] libmachine: Waiting for SSH to be available...
	I0205 02:04:15.068113   20618 main.go:141] libmachine: Getting to WaitForSSH function...
	I0205 02:04:15.068121   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:15.070300   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.070660   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:15.070687   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.070828   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:15.070993   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.071113   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.071240   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:15.071369   20618 main.go:141] libmachine: Using SSH client type: native
	I0205 02:04:15.071540   20618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0205 02:04:15.071552   20618 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0205 02:04:15.164489   20618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 02:04:15.164513   20618 main.go:141] libmachine: Detecting the provisioner...
	I0205 02:04:15.164520   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:15.166925   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.167180   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:15.167213   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.167381   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:15.167555   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.167744   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.167853   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:15.168012   20618 main.go:141] libmachine: Using SSH client type: native
	I0205 02:04:15.168175   20618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0205 02:04:15.168186   20618 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0205 02:04:15.266007   20618 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0205 02:04:15.266095   20618 main.go:141] libmachine: found compatible host: buildroot
	I0205 02:04:15.266108   20618 main.go:141] libmachine: Provisioning with buildroot...
	I0205 02:04:15.266117   20618 main.go:141] libmachine: (addons-395572) Calling .GetMachineName
	I0205 02:04:15.266357   20618 buildroot.go:166] provisioning hostname "addons-395572"
	I0205 02:04:15.266385   20618 main.go:141] libmachine: (addons-395572) Calling .GetMachineName
	I0205 02:04:15.266597   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:15.269135   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.269529   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:15.269559   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.269718   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:15.269896   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.270034   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.270160   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:15.270308   20618 main.go:141] libmachine: Using SSH client type: native
	I0205 02:04:15.270485   20618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0205 02:04:15.270502   20618 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-395572 && echo "addons-395572" | sudo tee /etc/hostname
	I0205 02:04:15.378766   20618 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-395572
	
	I0205 02:04:15.378797   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:15.381280   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.381606   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:15.381634   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.381756   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:15.381936   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.382089   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.382218   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:15.382334   20618 main.go:141] libmachine: Using SSH client type: native
	I0205 02:04:15.382488   20618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0205 02:04:15.382504   20618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-395572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-395572/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-395572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 02:04:15.485486   20618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 02:04:15.485522   20618 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 02:04:15.485559   20618 buildroot.go:174] setting up certificates
	I0205 02:04:15.485569   20618 provision.go:84] configureAuth start
	I0205 02:04:15.485579   20618 main.go:141] libmachine: (addons-395572) Calling .GetMachineName
	I0205 02:04:15.485858   20618 main.go:141] libmachine: (addons-395572) Calling .GetIP
	I0205 02:04:15.488141   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.488457   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:15.488486   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.488639   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:15.490625   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.490943   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:15.490975   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.491104   20618 provision.go:143] copyHostCerts
	I0205 02:04:15.491201   20618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 02:04:15.491355   20618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 02:04:15.491444   20618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 02:04:15.491519   20618 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.addons-395572 san=[127.0.0.1 192.168.39.234 addons-395572 localhost minikube]
	I0205 02:04:15.571147   20618 provision.go:177] copyRemoteCerts
	I0205 02:04:15.571218   20618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 02:04:15.571252   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:15.573865   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.574225   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:15.574243   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.574414   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:15.574590   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.574715   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:15.574866   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:15.650917   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 02:04:15.673986   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0205 02:04:15.695956   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0205 02:04:15.717722   20618 provision.go:87] duration metric: took 232.137258ms to configureAuth
	I0205 02:04:15.717743   20618 buildroot.go:189] setting minikube options for container-runtime
	I0205 02:04:15.717922   20618 config.go:182] Loaded profile config "addons-395572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:04:15.718008   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:15.720830   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.721200   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:15.721229   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.721454   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:15.721644   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.721826   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.721964   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:15.722134   20618 main.go:141] libmachine: Using SSH client type: native
	I0205 02:04:15.722338   20618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0205 02:04:15.722360   20618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 02:04:15.948264   20618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 02:04:15.948298   20618 main.go:141] libmachine: Checking connection to Docker...
	I0205 02:04:15.948310   20618 main.go:141] libmachine: (addons-395572) Calling .GetURL
	I0205 02:04:15.949586   20618 main.go:141] libmachine: (addons-395572) DBG | using libvirt version 6000000
	I0205 02:04:15.951402   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.951776   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:15.951808   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.951955   20618 main.go:141] libmachine: Docker is up and running!
	I0205 02:04:15.951971   20618 main.go:141] libmachine: Reticulating splines...
	I0205 02:04:15.951980   20618 client.go:171] duration metric: took 26.042956289s to LocalClient.Create
	I0205 02:04:15.952003   20618 start.go:167] duration metric: took 26.043024202s to libmachine.API.Create "addons-395572"
	I0205 02:04:15.952022   20618 start.go:293] postStartSetup for "addons-395572" (driver="kvm2")
	I0205 02:04:15.952034   20618 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 02:04:15.952052   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:15.952290   20618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 02:04:15.952319   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:15.954654   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.954959   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:15.954979   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:15.955148   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:15.955331   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:15.955487   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:15.955619   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:16.031084   20618 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 02:04:16.035345   20618 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 02:04:16.035367   20618 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 02:04:16.035443   20618 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 02:04:16.035476   20618 start.go:296] duration metric: took 83.44532ms for postStartSetup
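	(Editorial note, not part of the captured log: the ssh_runner/sshutil lines above boil down to dialing the guest over SSH with the machine's generated key and running one-off commands such as "cat /etc/os-release". A minimal standalone sketch of that pattern using golang.org/x/crypto/ssh follows; the address, user and key path are copied from this log as placeholders, and this is only an illustration, not minikube's actual ssh_runner implementation.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder values mirroring the log above (not a general-purpose tool).
	const (
		addr    = "192.168.39.234:22"
		user    = "docker"
		keyPath = "/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa"
		command = "cat /etc/os-release"
	)

	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// One session per remote command, mirroring each ssh_runner "Run:" line.
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(command)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}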
	I0205 02:04:16.035512   20618 main.go:141] libmachine: (addons-395572) Calling .GetConfigRaw
	I0205 02:04:16.036083   20618 main.go:141] libmachine: (addons-395572) Calling .GetIP
	I0205 02:04:16.038514   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:16.038805   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:16.038833   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:16.039057   20618 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/config.json ...
	I0205 02:04:16.039251   20618 start.go:128] duration metric: took 26.148204914s to createHost
	I0205 02:04:16.039278   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:16.041310   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:16.041635   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:16.041672   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:16.041826   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:16.042001   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:16.042148   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:16.042315   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:16.042489   20618 main.go:141] libmachine: Using SSH client type: native
	I0205 02:04:16.042689   20618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0205 02:04:16.042703   20618 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 02:04:16.137764   20618 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738721056.115645189
	
	I0205 02:04:16.137786   20618 fix.go:216] guest clock: 1738721056.115645189
	I0205 02:04:16.137793   20618 fix.go:229] Guest: 2025-02-05 02:04:16.115645189 +0000 UTC Remote: 2025-02-05 02:04:16.039268433 +0000 UTC m=+26.256287805 (delta=76.376756ms)
	I0205 02:04:16.137830   20618 fix.go:200] guest clock delta is within tolerance: 76.376756ms
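	(Editorial note, not part of the captured log: the fix.go lines above show minikube running "date +%s.%N" inside the guest and comparing the result with the host clock, accepting the machine when the delta is small, about 76ms here. A rough standalone sketch of that comparison is below; the host timestamp is the "Remote:" value from this log, and the 2-second tolerance is an assumed illustrative value, not the threshold minikube actually uses.)

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "date +%s.%N" output such as "1738721056.115645189"
// into a time.Time (the fractional part is nanoseconds).
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1738721056.115645189") // guest output from the log above
	if err != nil {
		panic(err)
	}
	// Host-side timestamp taken from the "Remote:" value in the log above.
	host := time.Date(2025, 2, 5, 2, 4, 16, 39268433, time.UTC)

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed for illustration only
	fmt.Printf("guest clock delta: %v (within tolerance: %t)\n", delta, delta <= tolerance)
}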
	I0205 02:04:16.137835   20618 start.go:83] releasing machines lock for "addons-395572", held for 26.246859113s
	I0205 02:04:16.137866   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:16.138106   20618 main.go:141] libmachine: (addons-395572) Calling .GetIP
	I0205 02:04:16.140482   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:16.140767   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:16.140794   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:16.140930   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:16.141392   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:16.141555   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:16.141650   20618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 02:04:16.141697   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:16.141727   20618 ssh_runner.go:195] Run: cat /version.json
	I0205 02:04:16.141748   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:16.144174   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:16.144460   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:16.144487   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:16.144569   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:16.144638   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:16.144819   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:16.144959   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:16.144986   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:16.144987   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:16.145170   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:16.145163   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:16.145306   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:16.145442   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:16.145598   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:16.240893   20618 ssh_runner.go:195] Run: systemctl --version
	I0205 02:04:16.246614   20618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 02:04:16.398053   20618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 02:04:16.403610   20618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 02:04:16.403685   20618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 02:04:16.418698   20618 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0205 02:04:16.418722   20618 start.go:495] detecting cgroup driver to use...
	I0205 02:04:16.418788   20618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 02:04:16.433773   20618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 02:04:16.447410   20618 docker.go:217] disabling cri-docker service (if available) ...
	I0205 02:04:16.447472   20618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 02:04:16.460047   20618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 02:04:16.472683   20618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 02:04:16.582110   20618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 02:04:16.750379   20618 docker.go:233] disabling docker service ...
	I0205 02:04:16.750448   20618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 02:04:16.763792   20618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 02:04:16.776533   20618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 02:04:16.893603   20618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 02:04:17.011492   20618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 02:04:17.025378   20618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 02:04:17.042794   20618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 02:04:17.042874   20618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:17.052760   20618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 02:04:17.052834   20618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:17.062698   20618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:17.072318   20618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:17.082020   20618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 02:04:17.091791   20618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:17.101320   20618 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:17.117450   20618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:17.127264   20618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 02:04:17.135887   20618 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0205 02:04:17.135962   20618 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0205 02:04:17.148000   20618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 02:04:17.157029   20618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 02:04:17.285164   20618 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 02:04:17.431323   20618 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 02:04:17.431404   20618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 02:04:17.435829   20618 start.go:563] Will wait 60s for crictl version
	I0205 02:04:17.435895   20618 ssh_runner.go:195] Run: which crictl
	I0205 02:04:17.439893   20618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 02:04:17.477285   20618 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 02:04:17.477444   20618 ssh_runner.go:195] Run: crio --version
	I0205 02:04:17.505415   20618 ssh_runner.go:195] Run: crio --version
	I0205 02:04:17.535135   20618 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0205 02:04:17.536336   20618 main.go:141] libmachine: (addons-395572) Calling .GetIP
	I0205 02:04:17.538588   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:17.538872   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:17.538907   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:17.539078   20618 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0205 02:04:17.542911   20618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 02:04:17.554686   20618 kubeadm.go:883] updating cluster {Name:addons-395572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-395572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 02:04:17.554847   20618 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:04:17.554891   20618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 02:04:17.587610   20618 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0205 02:04:17.587673   20618 ssh_runner.go:195] Run: which lz4
	I0205 02:04:17.591414   20618 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0205 02:04:17.595375   20618 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0205 02:04:17.595412   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0205 02:04:18.730213   20618 crio.go:462] duration metric: took 1.138827209s to copy over tarball
	I0205 02:04:18.730281   20618 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0205 02:04:20.897006   20618 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16669809s)
	I0205 02:04:20.897039   20618 crio.go:469] duration metric: took 2.166797481s to extract the tarball
	I0205 02:04:20.897049   20618 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0205 02:04:20.933760   20618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 02:04:20.972876   20618 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 02:04:20.972901   20618 cache_images.go:84] Images are preloaded, skipping loading
	I0205 02:04:20.972909   20618 kubeadm.go:934] updating node { 192.168.39.234 8443 v1.32.1 crio true true} ...
	I0205 02:04:20.972999   20618 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-395572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-395572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0205 02:04:20.973059   20618 ssh_runner.go:195] Run: crio config
	I0205 02:04:21.025137   20618 cni.go:84] Creating CNI manager for ""
	I0205 02:04:21.025160   20618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 02:04:21.025170   20618 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 02:04:21.025189   20618 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-395572 NodeName:addons-395572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 02:04:21.025308   20618 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-395572"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.234"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 02:04:21.025390   20618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 02:04:21.034746   20618 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 02:04:21.034827   20618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 02:04:21.043505   20618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0205 02:04:21.058467   20618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 02:04:21.072843   20618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0205 02:04:21.087353   20618 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0205 02:04:21.090529   20618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 02:04:21.101225   20618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 02:04:21.225513   20618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 02:04:21.242548   20618 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572 for IP: 192.168.39.234
	I0205 02:04:21.242582   20618 certs.go:194] generating shared ca certs ...
	I0205 02:04:21.242605   20618 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:21.242787   20618 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 02:04:21.332769   20618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt ...
	I0205 02:04:21.332800   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt: {Name:mk7394d1c106e73cead7f6332994390401d0e098 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:21.332968   20618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key ...
	I0205 02:04:21.332980   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key: {Name:mka31629c9aa4aa165ccf4f0ff8beaacbeb76edc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:21.333046   20618 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 02:04:21.393807   20618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt ...
	I0205 02:04:21.393838   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt: {Name:mkf17ca5ef3a49b417e75594781761f7049d4e82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:21.394000   20618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key ...
	I0205 02:04:21.394011   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key: {Name:mk13850720d5c5934183cbe41de16bbc099d8724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:21.394075   20618 certs.go:256] generating profile certs ...
	I0205 02:04:21.394125   20618 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.key
	I0205 02:04:21.394139   20618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt with IP's: []
	I0205 02:04:21.627613   20618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt ...
	I0205 02:04:21.627645   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: {Name:mka74f6a05451d8253922240a704fc510d773d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:21.627800   20618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.key ...
	I0205 02:04:21.627810   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.key: {Name:mk60a4218a701775fac91a5171685ff9bd85b17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:21.627872   20618 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.key.5553df50
	I0205 02:04:21.627889   20618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.crt.5553df50 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.234]
	I0205 02:04:21.888394   20618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.crt.5553df50 ...
	I0205 02:04:21.888434   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.crt.5553df50: {Name:mka27bb13cd2af3a3326286dae0d47c568f2013e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:21.888630   20618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.key.5553df50 ...
	I0205 02:04:21.888645   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.key.5553df50: {Name:mk0e87bfdb5c9cd6da529db6659f4b3c72228713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:21.888743   20618 certs.go:381] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.crt.5553df50 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.crt
	I0205 02:04:21.888819   20618 certs.go:385] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.key.5553df50 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.key
	I0205 02:04:21.888867   20618 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/proxy-client.key
	I0205 02:04:21.888883   20618 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/proxy-client.crt with IP's: []
	I0205 02:04:22.153921   20618 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/proxy-client.crt ...
	I0205 02:04:22.153952   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/proxy-client.crt: {Name:mk9c2ab0308f4aa852cc655833b9fc18ba64b6c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:22.154125   20618 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/proxy-client.key ...
	I0205 02:04:22.154139   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/proxy-client.key: {Name:mkabdaddbf43c635cdfe3f358866bdfd2b0c026d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:22.154347   20618 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 02:04:22.154385   20618 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 02:04:22.154410   20618 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 02:04:22.154440   20618 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
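	(Editorial note, not part of the captured log: the certs.go lines above cover generating the shared CA material, minikubeCA and proxyClientCA, plus the profile certs signed by it. As a rough illustration of what generating a self-signed CA certificate looks like at the crypto/x509 level; this is not minikube's certs.go, and the subject name and one-year validity are made-up illustrative values.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Key for the toy CA. 2048-bit RSA keeps the example fast.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Self-signed CA template: IsCA plus cert-signing key usage is what makes
	// it usable for signing further certificates.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "illustrativeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}

	// Template == parent makes the certificate self-signed.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	// PEM-encode cert and key, the same on-disk format as the ca.crt / ca.key files above.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}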
	I0205 02:04:22.154947   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 02:04:22.186198   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 02:04:22.207435   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 02:04:22.228315   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 02:04:22.259549   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0205 02:04:22.280439   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0205 02:04:22.301428   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 02:04:22.321439   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 02:04:22.341587   20618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 02:04:22.361499   20618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 02:04:22.375426   20618 ssh_runner.go:195] Run: openssl version
	I0205 02:04:22.380397   20618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 02:04:22.389421   20618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 02:04:22.393530   20618 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 02:04:22.393573   20618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 02:04:22.398708   20618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 02:04:22.409335   20618 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 02:04:22.412965   20618 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0205 02:04:22.413011   20618 kubeadm.go:392] StartCluster: {Name:addons-395572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-395572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:04:22.413074   20618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 02:04:22.413102   20618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 02:04:22.449651   20618 cri.go:89] found id: ""
	I0205 02:04:22.449733   20618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 02:04:22.458815   20618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 02:04:22.468643   20618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 02:04:22.479632   20618 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 02:04:22.479657   20618 kubeadm.go:157] found existing configuration files:
	
	I0205 02:04:22.479703   20618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 02:04:22.488094   20618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 02:04:22.488162   20618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 02:04:22.496729   20618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 02:04:22.505107   20618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 02:04:22.505170   20618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 02:04:22.514168   20618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 02:04:22.523095   20618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 02:04:22.523168   20618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 02:04:22.532551   20618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 02:04:22.541547   20618 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 02:04:22.541616   20618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 02:04:22.551166   20618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 02:04:22.604275   20618 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0205 02:04:22.604355   20618 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 02:04:22.703031   20618 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 02:04:22.703195   20618 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 02:04:22.703315   20618 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0205 02:04:22.711974   20618 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 02:04:22.823959   20618 out.go:235]   - Generating certificates and keys ...
	I0205 02:04:22.824100   20618 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 02:04:22.824163   20618 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 02:04:23.102060   20618 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 02:04:23.303488   20618 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 02:04:23.449256   20618 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 02:04:23.574197   20618 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 02:04:23.663604   20618 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 02:04:23.663719   20618 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-395572 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0205 02:04:23.766141   20618 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 02:04:23.766340   20618 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-395572 localhost] and IPs [192.168.39.234 127.0.0.1 ::1]
	I0205 02:04:24.049872   20618 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 02:04:24.192311   20618 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 02:04:24.506146   20618 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 02:04:24.506249   20618 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 02:04:24.908061   20618 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 02:04:25.190782   20618 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0205 02:04:25.292244   20618 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 02:04:25.461439   20618 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 02:04:25.588815   20618 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 02:04:25.589358   20618 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 02:04:25.591615   20618 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 02:04:25.593570   20618 out.go:235]   - Booting up control plane ...
	I0205 02:04:25.593699   20618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 02:04:25.593800   20618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 02:04:25.593904   20618 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 02:04:25.611564   20618 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 02:04:25.617199   20618 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 02:04:25.617267   20618 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 02:04:25.747203   20618 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0205 02:04:25.747357   20618 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0205 02:04:26.747880   20618 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001301256s
	I0205 02:04:26.747983   20618 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0205 02:04:31.252249   20618 kubeadm.go:310] [api-check] The API server is healthy after 4.504916728s
	I0205 02:04:31.674604   20618 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0205 02:04:31.692769   20618 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0205 02:04:31.719710   20618 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0205 02:04:31.719880   20618 kubeadm.go:310] [mark-control-plane] Marking the node addons-395572 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0205 02:04:31.730806   20618 kubeadm.go:310] [bootstrap-token] Using token: u0d8gj.yhprubztlrxf2ims
	I0205 02:04:31.732121   20618 out.go:235]   - Configuring RBAC rules ...
	I0205 02:04:31.732280   20618 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0205 02:04:31.738675   20618 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0205 02:04:31.756220   20618 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0205 02:04:31.760988   20618 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0205 02:04:31.768375   20618 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0205 02:04:31.773117   20618 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0205 02:04:31.889286   20618 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0205 02:04:32.319265   20618 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0205 02:04:32.891492   20618 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0205 02:04:32.892142   20618 kubeadm.go:310] 
	I0205 02:04:32.892201   20618 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0205 02:04:32.892213   20618 kubeadm.go:310] 
	I0205 02:04:32.892290   20618 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0205 02:04:32.892301   20618 kubeadm.go:310] 
	I0205 02:04:32.892333   20618 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0205 02:04:32.892390   20618 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0205 02:04:32.892451   20618 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0205 02:04:32.892459   20618 kubeadm.go:310] 
	I0205 02:04:32.892550   20618 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0205 02:04:32.892570   20618 kubeadm.go:310] 
	I0205 02:04:32.892644   20618 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0205 02:04:32.892654   20618 kubeadm.go:310] 
	I0205 02:04:32.892729   20618 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0205 02:04:32.892840   20618 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0205 02:04:32.892938   20618 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0205 02:04:32.892954   20618 kubeadm.go:310] 
	I0205 02:04:32.893051   20618 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0205 02:04:32.893153   20618 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0205 02:04:32.893163   20618 kubeadm.go:310] 
	I0205 02:04:32.893287   20618 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token u0d8gj.yhprubztlrxf2ims \
	I0205 02:04:32.893473   20618 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 \
	I0205 02:04:32.893518   20618 kubeadm.go:310] 	--control-plane 
	I0205 02:04:32.893531   20618 kubeadm.go:310] 
	I0205 02:04:32.893650   20618 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0205 02:04:32.893662   20618 kubeadm.go:310] 
	I0205 02:04:32.893793   20618 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token u0d8gj.yhprubztlrxf2ims \
	I0205 02:04:32.893931   20618 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 
	I0205 02:04:32.894077   20618 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
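
	For context on the [kubelet-check] and [api-check] lines above: kubeadm simply polls health endpoints until they answer or a 4m0s budget runs out. The following is a minimal sketch of that kind of poll, not minikube or kubeadm code; the kubelet healthz URL and the 4m0s budget come from the log, while the one-second interval and the use of plain net/http are assumptions for illustration (the real API-server check also needs TLS against port 8443, which is omitted here).

    // healthprobe.go — illustrative sketch of a readiness poll like the
    // [kubelet-check] step logged above (assumed interval, plain HTTP only).
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns HTTP 200 or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        // Kubelet health endpoint referenced in the [kubelet-check] log line.
        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
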
	I0205 02:04:32.894110   20618 cni.go:84] Creating CNI manager for ""
	I0205 02:04:32.894123   20618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 02:04:32.896441   20618 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0205 02:04:32.897924   20618 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0205 02:04:32.908650   20618 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
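
	The scp line above is the "Configuring bridge CNI" step dropping a conflist into /etc/cni/net.d. As a rough illustration of what such a file looks like, the sketch below writes a generic bridge-plus-portmap CNI config; the exact 496-byte contents minikube installs may differ, and the subnet, plugin options, and output path here are assumptions, not the real payload.

    // cniconflist.go — illustrative only: a generic bridge CNI config of the
    // general shape installed at /etc/cni/net.d/1-k8s.conflist (contents assumed).
    package main

    import "os"

    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        // Written to the working directory here; the real step copies the file
        // onto the node over SSH.
        if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
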
	I0205 02:04:32.929734   20618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 02:04:32.929814   20618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:32.929826   20618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-395572 minikube.k8s.io/updated_at=2025_02_05T02_04_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d minikube.k8s.io/name=addons-395572 minikube.k8s.io/primary=true
	I0205 02:04:32.944891   20618 ops.go:34] apiserver oom_adj: -16
	I0205 02:04:33.083336   20618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:33.583491   20618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:34.084328   20618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:34.583702   20618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:35.083511   20618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:35.583482   20618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:36.083392   20618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:36.584014   20618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:37.083936   20618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:37.184470   20618 kubeadm.go:1113] duration metric: took 4.254712161s to wait for elevateKubeSystemPrivileges
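
	The repeated "kubectl get sa default" runs above, followed by the elevateKubeSystemPrivileges duration metric, are a wait loop: the command is retried until the default ServiceAccount exists, at which point the RBAC binding can take effect. The sketch below shows that pattern, not minikube's implementation; the kubectl path, kubeconfig path, and roughly 500ms interval are taken from the log, while running the command locally via os/exec (rather than over SSH) and the 2-minute budget are assumptions.

    // sapoll.go — sketch of a "wait for the default ServiceAccount" loop like
    // the one visible in the log above (local exec assumed for illustration).
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // ServiceAccount exists; RBAC bootstrap can proceed.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default ServiceAccount not found within %s", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.32.1/kubectl",
            "/var/lib/minikube/kubeconfig", 2*time.Minute)
        fmt.Println(err)
    }
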
	I0205 02:04:37.184505   20618 kubeadm.go:394] duration metric: took 14.771497025s to StartCluster
	I0205 02:04:37.184522   20618 settings.go:142] acquiring lock: {Name:mk2eca847da5ba78f5b041a83e5cfcbdebb0c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:37.184647   20618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 02:04:37.185019   20618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:37.185233   20618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0205 02:04:37.185242   20618 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 02:04:37.185328   20618 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
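
	The toEnable map logged above drives the burst of "Setting addon X=true in profile" lines that follow. The toy sketch below only illustrates that mapping from a map[string]bool to per-addon enable messages; it is not minikube code, the addon names are copied from the log, and everything else is an assumption for the example.

    // addonfilter.go — toy illustration of turning a toEnable map into
    // per-addon "Setting addon X=true" messages (not minikube code).
    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        toEnable := map[string]bool{
            "ingress":        true,
            "ingress-dns":    true,
            "metrics-server": true,
            "volcano":        true, // later rejected: "volcano addon does not support crio"
            "dashboard":      false,
        }

        var enabled []string
        for name, on := range toEnable {
            if on {
                enabled = append(enabled, name)
            }
        }
        sort.Strings(enabled)
        for _, name := range enabled {
            fmt.Printf("Setting addon %s=true in profile %q\n", name, "addons-395572")
        }
    }
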
	I0205 02:04:37.185438   20618 config.go:182] Loaded profile config "addons-395572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:04:37.185451   20618 addons.go:69] Setting yakd=true in profile "addons-395572"
	I0205 02:04:37.185475   20618 addons.go:69] Setting storage-provisioner=true in profile "addons-395572"
	I0205 02:04:37.185482   20618 addons.go:238] Setting addon yakd=true in "addons-395572"
	I0205 02:04:37.185486   20618 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-395572"
	I0205 02:04:37.185495   20618 addons.go:238] Setting addon storage-provisioner=true in "addons-395572"
	I0205 02:04:37.185496   20618 addons.go:69] Setting volcano=true in profile "addons-395572"
	I0205 02:04:37.185511   20618 addons.go:69] Setting cloud-spanner=true in profile "addons-395572"
	I0205 02:04:37.185514   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.185508   20618 addons.go:69] Setting volumesnapshots=true in profile "addons-395572"
	I0205 02:04:37.185520   20618 addons.go:238] Setting addon volcano=true in "addons-395572"
	I0205 02:04:37.185528   20618 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-395572"
	I0205 02:04:37.185530   20618 addons.go:238] Setting addon volumesnapshots=true in "addons-395572"
	I0205 02:04:37.185534   20618 addons.go:69] Setting metrics-server=true in profile "addons-395572"
	I0205 02:04:37.185548   20618 addons.go:238] Setting addon metrics-server=true in "addons-395572"
	I0205 02:04:37.185549   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.185565   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.185570   20618 addons.go:69] Setting default-storageclass=true in profile "addons-395572"
	I0205 02:04:37.185577   20618 addons.go:69] Setting ingress=true in profile "addons-395572"
	I0205 02:04:37.185586   20618 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-395572"
	I0205 02:04:37.185594   20618 addons.go:69] Setting gcp-auth=true in profile "addons-395572"
	I0205 02:04:37.185610   20618 mustload.go:65] Loading cluster: addons-395572
	I0205 02:04:37.185777   20618 config.go:182] Loaded profile config "addons-395572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:04:37.185797   20618 addons.go:69] Setting ingress-dns=true in profile "addons-395572"
	I0205 02:04:37.185819   20618 addons.go:238] Setting addon ingress-dns=true in "addons-395572"
	I0205 02:04:37.185864   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.186002   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186004   20618 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-395572"
	I0205 02:04:37.186011   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186022   20618 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-395572"
	I0205 02:04:37.186030   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.186030   20618 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-395572"
	I0205 02:04:37.186045   20618 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-395572"
	I0205 02:04:37.186056   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186067   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186076   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.186101   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186105   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.186125   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.185589   20618 addons.go:238] Setting addon ingress=true in "addons-395572"
	I0205 02:04:37.185569   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.186136   20618 addons.go:69] Setting registry=true in profile "addons-395572"
	I0205 02:04:37.185506   20618 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-395572"
	I0205 02:04:37.186151   20618 addons.go:238] Setting addon registry=true in "addons-395572"
	I0205 02:04:37.185529   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.186177   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.186219   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186251   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.186023   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.185565   20618 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-395572"
	I0205 02:04:37.186374   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186390   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.186436   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.185471   20618 addons.go:69] Setting inspektor-gadget=true in profile "addons-395572"
	I0205 02:04:37.186497   20618 addons.go:238] Setting addon inspektor-gadget=true in "addons-395572"
	I0205 02:04:37.185521   20618 addons.go:238] Setting addon cloud-spanner=true in "addons-395572"
	I0205 02:04:37.186128   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.186678   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186702   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.186714   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.186735   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186752   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.186762   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186762   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186780   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.186783   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.186829   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.186846   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.186906   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.187093   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.187289   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.187796   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.187841   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.217485   20618 out.go:177] * Verifying Kubernetes components...
	I0205 02:04:37.217842   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.217872   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.218025   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.218066   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.218944   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0205 02:04:37.219211   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0205 02:04:37.219460   20618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 02:04:37.217527   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0205 02:04:37.219649   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.220310   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.220329   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.220398   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.220473   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.220834   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.220967   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.220977   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.221242   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.221262   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.222697   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.222741   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.229811   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.229985   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.230002   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.230099   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.230367   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.230898   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.230946   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.238885   20618 addons.go:238] Setting addon default-storageclass=true in "addons-395572"
	I0205 02:04:37.238928   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.239301   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.239336   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.249673   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0205 02:04:37.250317   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.251319   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.251347   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.251793   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.252461   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.252507   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.253814   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38479
	I0205 02:04:37.255024   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.255625   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.255643   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.256002   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.256585   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.256625   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.256919   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
	I0205 02:04:37.257377   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.257843   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.257872   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.258175   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.258736   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.258787   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.262253   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36753
	I0205 02:04:37.262817   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.263312   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.263341   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.263967   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.264570   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.264612   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.264664   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45015
	I0205 02:04:37.265142   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.265718   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.265737   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.266061   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0205 02:04:37.266390   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.266559   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.267072   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.267110   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.267852   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.267880   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.268349   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.268624   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.270743   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0205 02:04:37.271298   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44311
	I0205 02:04:37.271771   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.272316   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.272339   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.272677   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.273247   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.273295   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.273678   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.274538   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.274564   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.275061   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.275625   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.275663   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.281488   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0205 02:04:37.281647   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I0205 02:04:37.281753   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.282524   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.283107   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.283125   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.283520   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.283731   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.284337   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45765
	I0205 02:04:37.284908   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.285394   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.285409   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.286122   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.286131   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.286713   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.286765   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.287043   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.289008   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36815
	I0205 02:04:37.289523   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0205 02:04:37.289542   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.289992   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.290074   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.290088   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.290435   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.290537   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.290560   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.290618   20618 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0205 02:04:37.291039   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.291073   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.291296   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.291431   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.292458   20618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0205 02:04:37.292484   20618 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0205 02:04:37.292504   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.293433   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.294017   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.294037   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.294452   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.295098   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.295178   20618 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 02:04:37.295436   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0205 02:04:37.295927   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.296208   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.296587   20618 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 02:04:37.296605   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0205 02:04:37.296609   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.296625   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.296627   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.296929   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.296961   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.297939   20618 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-395572"
	I0205 02:04:37.297979   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:37.298350   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.298449   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.299011   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.299047   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.299339   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.300496   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.300556   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.300579   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.300596   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.300726   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.300827   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.300833   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.300870   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.300950   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.301266   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.301430   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.301543   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.301637   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.304064   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I0205 02:04:37.304501   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.305007   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.305030   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.305844   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.306005   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.309299   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44519
	I0205 02:04:37.309993   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.311587   20618 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0205 02:04:37.311939   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41115
	I0205 02:04:37.312181   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0205 02:04:37.312622   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.312694   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.313031   20618 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0205 02:04:37.313051   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0205 02:04:37.313069   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.313208   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.313224   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.313612   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.313828   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.314439   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.314455   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.314696   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34663
	I0205 02:04:37.315519   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.315910   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.316584   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.316937   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.316948   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.317331   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.317412   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.317633   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.317729   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.317736   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.317752   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.317904   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.317948   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.318207   20618 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0205 02:04:37.318269   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.318318   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.318447   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.318739   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.319328   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.319344   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.319433   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I0205 02:04:37.319598   20618 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0205 02:04:37.319622   20618 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0205 02:04:37.319641   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.319754   20618 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0205 02:04:37.319871   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.320104   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.320435   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.320452   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.320862   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.320893   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.321056   20618 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0205 02:04:37.321070   20618 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0205 02:04:37.321089   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.321415   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.321923   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.321962   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.322178   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.324199   20618 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0205 02:04:37.324608   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.325176   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42625
	I0205 02:04:37.325586   20618 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0205 02:04:37.325603   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0205 02:04:37.325621   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.326262   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.326334   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.326352   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.326399   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.327547   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.327692   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.327704   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.327774   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0205 02:04:37.327888   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.328276   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.328391   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.328688   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.328976   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.329411   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.329432   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.329468   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.329711   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.329917   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.330104   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.330419   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36945
	I0205 02:04:37.330544   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.331412   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.331433   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.331498   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.332128   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.332193   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.332462   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.332518   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.333025   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.333041   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.333093   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.333382   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.333400   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.333552   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.333662   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.333772   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.334186   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0205 02:04:37.334481   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.334968   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.334987   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.335352   20618 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0205 02:04:37.335647   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.335717   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.335821   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.336419   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.336571   20618 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0205 02:04:37.336589   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0205 02:04:37.336607   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.336636   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:37.336644   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:37.336806   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:37.336825   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:37.336832   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:37.336839   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:37.336846   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:37.337035   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:37.337054   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:37.337062   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	W0205 02:04:37.337135   20618 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0205 02:04:37.337984   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.339877   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.339948   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.340479   20618 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0205 02:04:37.340602   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.340677   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0205 02:04:37.340801   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.340816   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.341659   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.341662   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.341908   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.342995   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.343016   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.343073   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.343181   20618 out.go:177]   - Using image docker.io/registry:2.8.3
	I0205 02:04:37.343721   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.343970   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.344258   20618 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0205 02:04:37.344278   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0205 02:04:37.344293   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.345952   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.347664   20618 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0205 02:04:37.347759   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.348212   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.348244   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.348402   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.348470   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43849
	I0205 02:04:37.348838   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.349013   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.349160   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.349485   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.349553   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0205 02:04:37.349802   20618 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0205 02:04:37.350049   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.350065   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.350131   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.350776   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.350798   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.350845   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.351001   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.351546   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.351712   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.351921   20618 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0205 02:04:37.352038   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0205 02:04:37.352449   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.352850   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.352873   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.353189   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.353255   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.353502   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.353920   20618 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0205 02:04:37.353942   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0205 02:04:37.353960   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.354497   20618 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0205 02:04:37.355004   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.355289   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.355611   20618 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0205 02:04:37.355625   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0205 02:04:37.355642   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.357008   20618 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0205 02:04:37.357120   20618 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0205 02:04:37.358318   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41219
	I0205 02:04:37.358481   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.358623   20618 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0205 02:04:37.358640   20618 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0205 02:04:37.358659   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.359120   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.359140   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.359352   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.359423   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.359463   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.359555   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.359700   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.359756   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.359773   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.359963   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.359995   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.360172   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.360185   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.360241   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.360288   20618 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0205 02:04:37.360387   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.360517   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.360908   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.361523   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:37.361562   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:37.362055   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.362531   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.362554   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.362751   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.362917   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.363030   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.363088   20618 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0205 02:04:37.363237   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.365446   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I0205 02:04:37.365791   20618 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0205 02:04:37.365871   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.366373   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.366390   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.366819   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.367077   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.368101   20618 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0205 02:04:37.368706   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.368984   20618 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0205 02:04:37.369000   20618 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0205 02:04:37.369015   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.370309   20618 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0205 02:04:37.371490   20618 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0205 02:04:37.372037   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.372556   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.372594   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.372750   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.372902   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.373033   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.373153   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.373924   20618 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0205 02:04:37.374980   20618 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0205 02:04:37.374999   20618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0205 02:04:37.375019   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.378348   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.378792   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.378822   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.379025   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.379222   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.379404   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.379557   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.379849   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39367
	I0205 02:04:37.380273   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:37.380638   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:37.380656   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:37.380968   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:37.381124   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:37.382512   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:37.384226   20618 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0205 02:04:37.385599   20618 out.go:177]   - Using image docker.io/busybox:stable
	I0205 02:04:37.386969   20618 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0205 02:04:37.386995   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0205 02:04:37.387019   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:37.389854   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.390191   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:37.390222   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:37.390394   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:37.390589   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:37.390721   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:37.390838   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:37.647834   20618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 02:04:37.647966   20618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0205 02:04:37.681166   20618 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0205 02:04:37.681193   20618 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0205 02:04:37.720725   20618 node_ready.go:35] waiting up to 6m0s for node "addons-395572" to be "Ready" ...
	I0205 02:04:37.724440   20618 node_ready.go:49] node "addons-395572" has status "Ready":"True"
	I0205 02:04:37.724463   20618 node_ready.go:38] duration metric: took 3.712483ms for node "addons-395572" to be "Ready" ...
	I0205 02:04:37.724472   20618 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 02:04:37.730920   20618 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-c4gpv" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:37.787816   20618 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0205 02:04:37.787847   20618 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0205 02:04:37.814336   20618 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0205 02:04:37.814370   20618 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0205 02:04:37.827794   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0205 02:04:37.834858   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0205 02:04:37.861421   20618 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0205 02:04:37.861444   20618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0205 02:04:37.932209   20618 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0205 02:04:37.932234   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0205 02:04:37.961688   20618 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0205 02:04:37.961716   20618 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0205 02:04:37.966011   20618 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0205 02:04:37.966042   20618 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0205 02:04:37.967804   20618 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0205 02:04:37.967831   20618 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0205 02:04:37.986341   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0205 02:04:37.990076   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0205 02:04:38.000710   20618 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0205 02:04:38.000742   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0205 02:04:38.005936   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0205 02:04:38.018777   20618 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0205 02:04:38.018809   20618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0205 02:04:38.024329   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 02:04:38.025379   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0205 02:04:38.035358   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0205 02:04:38.121301   20618 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0205 02:04:38.121324   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0205 02:04:38.126033   20618 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0205 02:04:38.126051   20618 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0205 02:04:38.136969   20618 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0205 02:04:38.136994   20618 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0205 02:04:38.142176   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0205 02:04:38.176274   20618 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0205 02:04:38.176299   20618 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0205 02:04:38.182017   20618 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0205 02:04:38.182036   20618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0205 02:04:38.327859   20618 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0205 02:04:38.327887   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0205 02:04:38.355803   20618 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0205 02:04:38.355830   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0205 02:04:38.379642   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0205 02:04:38.383409   20618 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0205 02:04:38.383435   20618 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0205 02:04:38.423055   20618 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0205 02:04:38.423099   20618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0205 02:04:38.491428   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0205 02:04:38.512355   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0205 02:04:38.676181   20618 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0205 02:04:38.676219   20618 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0205 02:04:38.691038   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0205 02:04:39.199825   20618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0205 02:04:39.199850   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0205 02:04:39.562412   20618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0205 02:04:39.562445   20618 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0205 02:04:39.756360   20618 pod_ready.go:103] pod "coredns-668d6bf9bc-c4gpv" in "kube-system" namespace has status "Ready":"False"
	I0205 02:04:39.836724   20618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0205 02:04:39.836746   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0205 02:04:39.973184   20618 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.32517519s)
	I0205 02:04:39.973223   20618 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
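(The completed ssh_runner command above pipes the coredns ConfigMap through sed, splicing a hosts block in front of the forward directive and a log directive before errors, then replaces the ConfigMap; this is how host.minikube.internal resolves to the host IP, 192.168.39.1 here, from inside the cluster. As a rough illustration only, assuming a stock CoreDNS Corefile with other plugins omitted, the patched fragment would look approximately like:

	.:53 {
	    log
	    errors
	    health
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}

The fallthrough line keeps every name other than host.minikube.internal flowing on to the forward plugin.)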
	I0205 02:04:40.077314   20618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0205 02:04:40.077355   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0205 02:04:40.380549   20618 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0205 02:04:40.380576   20618 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0205 02:04:40.478622   20618 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-395572" context rescaled to 1 replicas
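(The kapi.go:214 rescale above is an ordinary deployment scale to one replica; assuming the same in-VM kubeconfig and kubectl paths used throughout this log, a manual equivalent would be roughly:

	# scale the kube-system coredns deployment down to a single replica
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl -n kube-system scale deployment coredns --replicas=1
)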
	I0205 02:04:40.690724   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0205 02:04:42.236461   20618 pod_ready.go:103] pod "coredns-668d6bf9bc-c4gpv" in "kube-system" namespace has status "Ready":"False"
	I0205 02:04:44.177999   20618 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0205 02:04:44.178042   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:44.180705   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:44.181133   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:44.181166   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:44.181361   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:44.181591   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:44.181758   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:44.181911   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:44.313334   20618 pod_ready.go:103] pod "coredns-668d6bf9bc-c4gpv" in "kube-system" namespace has status "Ready":"False"
	I0205 02:04:44.781038   20618 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0205 02:04:44.943999   20618 addons.go:238] Setting addon gcp-auth=true in "addons-395572"
	I0205 02:04:44.944047   20618 host.go:66] Checking if "addons-395572" exists ...
	I0205 02:04:44.944332   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:44.944374   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:44.959976   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
	I0205 02:04:44.960410   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:44.960834   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:44.960859   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:44.961177   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:44.961707   20618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:04:44.961749   20618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:04:44.976815   20618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0205 02:04:44.977300   20618 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:04:44.977806   20618 main.go:141] libmachine: Using API Version  1
	I0205 02:04:44.977828   20618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:04:44.978102   20618 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:04:44.978279   20618 main.go:141] libmachine: (addons-395572) Calling .GetState
	I0205 02:04:44.979681   20618 main.go:141] libmachine: (addons-395572) Calling .DriverName
	I0205 02:04:44.979942   20618 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0205 02:04:44.979970   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHHostname
	I0205 02:04:44.982644   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:44.983047   20618 main.go:141] libmachine: (addons-395572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:87:50", ip: ""} in network mk-addons-395572: {Iface:virbr1 ExpiryTime:2025-02-05 03:04:05 +0000 UTC Type:0 Mac:52:54:00:e9:87:50 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:addons-395572 Clientid:01:52:54:00:e9:87:50}
	I0205 02:04:44.983079   20618 main.go:141] libmachine: (addons-395572) DBG | domain addons-395572 has defined IP address 192.168.39.234 and MAC address 52:54:00:e9:87:50 in network mk-addons-395572
	I0205 02:04:44.983224   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHPort
	I0205 02:04:44.983408   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHKeyPath
	I0205 02:04:44.983571   20618 main.go:141] libmachine: (addons-395572) Calling .GetSSHUsername
	I0205 02:04:44.983728   20618 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/addons-395572/id_rsa Username:docker}
	I0205 02:04:45.885208   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.057373498s)
	I0205 02:04:45.885262   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.050370013s)
	I0205 02:04:45.885304   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.898934446s)
	I0205 02:04:45.885267   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885328   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885370   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.895268003s)
	I0205 02:04:45.885384   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885309   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885403   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885426   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.879454094s)
	I0205 02:04:45.885406   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885468   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.861119489s)
	I0205 02:04:45.885391   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885484   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885489   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885452   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885501   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885491   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885541   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.860135979s)
	I0205 02:04:45.885565   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885567   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.850184752s)
	I0205 02:04:45.885574   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885581   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885589   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885629   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.885638   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.885646   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885648   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.885668   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.885680   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885689   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885692   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.743493125s)
	I0205 02:04:45.885709   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885718   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885653   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885787   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.50611837s)
	I0205 02:04:45.885802   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.885804   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885821   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885827   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.885842   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.885852   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885860   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885881   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.885880   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.394388573s)
	I0205 02:04:45.885902   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885908   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885947   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.373561168s)
	I0205 02:04:45.885966   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.885977   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.885982   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.886002   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.886009   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.886016   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.886021   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.886067   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.886086   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.886093   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.886099   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.886105   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.886149   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.195076142s)
	W0205 02:04:45.886181   20618 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0205 02:04:45.886188   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.886223   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.886233   20618 retry.go:31] will retry after 290.661812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
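(The failed apply quoted above is the usual CRD-ordering race: the VolumeSnapshotClass object is submitted in the same kubectl apply batch as the CRD that defines its kind, so the API server rejects it until the CRD is established. minikube simply schedules a retry after ~290ms and re-runs the apply with --force at 02:04:46.178, by which point the CRDs created by the first pass can be registered. A hedged sketch of avoiding the same race by hand, using the addon file and binary paths from this log plus the standard kubectl wait-for-established pattern (not something minikube itself runs here):

	# apply the CRD first, wait for it to be established, then apply the class that uses it
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
)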
	I0205 02:04:45.886240   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.886254   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.886271   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.886294   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.886306   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.886313   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.886325   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.886376   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.886384   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.886392   20618 addons.go:479] Verifying addon ingress=true in "addons-395572"
	I0205 02:04:45.887610   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.887623   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.887631   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.887638   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.888102   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.888121   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.888147   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.888153   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.888159   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.888165   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.888212   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.888229   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.888231   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.888236   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.888259   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.888267   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.888274   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.888282   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.888932   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.888957   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.888963   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.890425   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.890442   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.890452   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.890459   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.890465   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.890470   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.890473   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.890481   20618 addons.go:479] Verifying addon registry=true in "addons-395572"
	I0205 02:04:45.890487   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.890612   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.890821   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.890648   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.890672   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.890873   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.890691   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.890709   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.891057   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.891065   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.891073   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.891185   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.891192   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.891401   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.891432   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.891439   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.891442   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.891452   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.891459   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.891482   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.891491   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.891880   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.892010   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.892020   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.892029   20618 addons.go:479] Verifying addon metrics-server=true in "addons-395572"
	I0205 02:04:45.892116   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.892146   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.892153   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.892557   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.892595   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.892607   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:45.893826   20618 out.go:177] * Verifying ingress addon...
	I0205 02:04:45.894757   20618 out.go:177] * Verifying registry addon...
	I0205 02:04:45.895596   20618 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-395572 service yakd-dashboard -n yakd-dashboard
	
	I0205 02:04:45.896499   20618 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0205 02:04:45.897082   20618 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0205 02:04:45.913613   20618 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0205 02:04:45.913645   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:45.913621   20618 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0205 02:04:45.913664   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:45.925730   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.925755   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.926062   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.926077   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	W0205 02:04:45.926161   20618 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0205 02:04:45.931151   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:45.931173   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:45.931413   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:45.931452   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:45.931460   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:46.178058   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0205 02:04:46.408905   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:46.414227   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:46.769022   20618 pod_ready.go:93] pod "coredns-668d6bf9bc-c4gpv" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:46.769045   20618 pod_ready.go:82] duration metric: took 9.03810146s for pod "coredns-668d6bf9bc-c4gpv" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:46.769055   20618 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-ntzb5" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:46.781904   20618 pod_ready.go:93] pod "coredns-668d6bf9bc-ntzb5" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:46.781926   20618 pod_ready.go:82] duration metric: took 12.864237ms for pod "coredns-668d6bf9bc-ntzb5" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:46.781936   20618 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-395572" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:46.802907   20618 pod_ready.go:93] pod "etcd-addons-395572" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:46.802927   20618 pod_ready.go:82] duration metric: took 20.985697ms for pod "etcd-addons-395572" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:46.802937   20618 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-395572" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:46.912889   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:46.915121   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:47.073775   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.383000593s)
	I0205 02:04:47.073830   20618 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.093862404s)
	I0205 02:04:47.073833   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:47.073984   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:47.074263   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:47.074287   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:47.074302   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:47.074322   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:47.074329   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:47.074620   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:47.074648   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:47.074658   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:47.074678   20618 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-395572"
	I0205 02:04:47.075460   20618 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0205 02:04:47.076377   20618 out.go:177] * Verifying csi-hostpath-driver addon...
	I0205 02:04:47.077946   20618 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0205 02:04:47.078573   20618 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0205 02:04:47.079090   20618 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0205 02:04:47.079114   20618 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0205 02:04:47.092617   20618 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0205 02:04:47.092636   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:47.223882   20618 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0205 02:04:47.223911   20618 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0205 02:04:47.309662   20618 pod_ready.go:93] pod "kube-apiserver-addons-395572" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:47.309687   20618 pod_ready.go:82] duration metric: took 506.743733ms for pod "kube-apiserver-addons-395572" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:47.309700   20618 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-395572" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:47.314279   20618 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0205 02:04:47.314299   20618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0205 02:04:47.320095   20618 pod_ready.go:93] pod "kube-controller-manager-addons-395572" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:47.320114   20618 pod_ready.go:82] duration metric: took 10.406786ms for pod "kube-controller-manager-addons-395572" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:47.320124   20618 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wmv2h" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:47.405061   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:47.405528   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:47.452436   20618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
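The gcp-auth manifests above are applied with the kubectl binary bundled in the VM against the in-VM kubeconfig. The sketch below reproduces the same command shape locally with os/exec; minikube itself issues it through an SSH runner inside the guest, so the local execution (and the hard-coded paths, taken from the log) are purely illustrative assumptions, not minikube's implementation.

// apply_addon_sketch.go - illustrative only; paths copied from the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// sudo accepts VAR=value arguments before the command, matching the log line.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.32.1/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/gcp-auth-ns.yaml",
		"-f", "/etc/kubernetes/addons/gcp-auth-service.yaml",
		"-f", "/etc/kubernetes/addons/gcp-auth-webhook.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}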
	I0205 02:04:47.581367   20618 pod_ready.go:93] pod "kube-proxy-wmv2h" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:47.581391   20618 pod_ready.go:82] duration metric: took 261.262014ms for pod "kube-proxy-wmv2h" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:47.581401   20618 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-395572" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:47.584138   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:47.900500   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:47.900520   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:47.939385   20618 pod_ready.go:93] pod "kube-scheduler-addons-395572" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:47.939413   20618 pod_ready.go:82] duration metric: took 358.004466ms for pod "kube-scheduler-addons-395572" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:47.939424   20618 pod_ready.go:39] duration metric: took 10.214940285s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
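The pod_ready.go entries above poll named kube-system pods until their Ready condition turns True. Below is a minimal client-go sketch of that kind of wait, assuming the in-VM kubeconfig path, a 2s poll interval, and a 6m timeout; it is not minikube's own pod_ready implementation.

// pod_ready_sketch.go - a sketch of waiting for a named pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Pod name taken from the log; any system pod works the same way.
	if err := waitPodReady(cs, "kube-system", "etcd-addons-395572", 6*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}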
	I0205 02:04:47.939456   20618 api_server.go:52] waiting for apiserver process to appear ...
	I0205 02:04:47.939511   20618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 02:04:48.086916   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:48.401010   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:48.401100   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:48.582205   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:48.913932   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:48.914002   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:48.955690   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.777580131s)
	I0205 02:04:48.955741   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:48.955757   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:48.956092   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:48.956113   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:48.956123   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:48.956129   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:48.956132   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:48.956529   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:48.956544   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:49.068029   20618 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.615545171s)
	I0205 02:04:49.068078   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:49.068083   20618 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.128551346s)
	I0205 02:04:49.068115   20618 api_server.go:72] duration metric: took 11.882842985s to wait for apiserver process to appear ...
	I0205 02:04:49.068127   20618 api_server.go:88] waiting for apiserver healthz status ...
	I0205 02:04:49.068150   20618 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0205 02:04:49.068091   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:49.068495   20618 main.go:141] libmachine: (addons-395572) DBG | Closing plugin on server side
	I0205 02:04:49.068541   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:49.068554   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:49.068568   20618 main.go:141] libmachine: Making call to close driver server
	I0205 02:04:49.068577   20618 main.go:141] libmachine: (addons-395572) Calling .Close
	I0205 02:04:49.068794   20618 main.go:141] libmachine: Successfully made call to close driver server
	I0205 02:04:49.068814   20618 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 02:04:49.069667   20618 addons.go:479] Verifying addon gcp-auth=true in "addons-395572"
	I0205 02:04:49.071423   20618 out.go:177] * Verifying gcp-auth addon...
	I0205 02:04:49.073126   20618 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0205 02:04:49.089446   20618 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0205 02:04:49.091516   20618 api_server.go:141] control plane version: v1.32.1
	I0205 02:04:49.091539   20618 api_server.go:131] duration metric: took 23.405566ms to wait for apiserver health ...
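The health check above hits https://192.168.39.234:8443/healthz until it returns 200 with body "ok". A minimal sketch of that probe is shown below; the 5s interval, 2m deadline, and skipping TLS verification (the real client trusts the cluster CA) are assumptions made to keep the example self-contained.

// healthz_probe_sketch.go - poll the apiserver /healthz endpoint until it reports ok.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.234:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}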
	I0205 02:04:49.091547   20618 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 02:04:49.101773   20618 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0205 02:04:49.101809   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:49.102511   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:49.102981   20618 system_pods.go:59] 19 kube-system pods found
	I0205 02:04:49.103009   20618 system_pods.go:61] "amd-gpu-device-plugin-v2g4x" [f4c9493c-8fa3-44c5-bff3-3a4e156a1233] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0205 02:04:49.103019   20618 system_pods.go:61] "coredns-668d6bf9bc-c4gpv" [55d96cfe-50b0-4fe1-8739-e44f0225d19b] Running
	I0205 02:04:49.103027   20618 system_pods.go:61] "coredns-668d6bf9bc-ntzb5" [a838af29-6255-4892-9072-e805c99bcbc4] Running
	I0205 02:04:49.103035   20618 system_pods.go:61] "csi-hostpath-attacher-0" [008c0274-352d-4f18-a798-24149ea2d7dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0205 02:04:49.103042   20618 system_pods.go:61] "csi-hostpath-resizer-0" [63832836-7bed-4292-b28c-3bd2c4682aae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0205 02:04:49.103055   20618 system_pods.go:61] "csi-hostpathplugin-nw2vn" [f6374d3e-ea9b-4862-aa55-5619fd62c262] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0205 02:04:49.103075   20618 system_pods.go:61] "etcd-addons-395572" [c9db7a76-6aee-48f9-ace9-a9b5bfe48f17] Running
	I0205 02:04:49.103079   20618 system_pods.go:61] "kube-apiserver-addons-395572" [c46e892f-8e39-43b5-a613-f9f77296a248] Running
	I0205 02:04:49.103082   20618 system_pods.go:61] "kube-controller-manager-addons-395572" [de63c3b7-306d-4fab-aeb9-bbae844a1b49] Running
	I0205 02:04:49.103087   20618 system_pods.go:61] "kube-ingress-dns-minikube" [0453f72c-00bd-4d62-99f9-7d6837d37e34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0205 02:04:49.103093   20618 system_pods.go:61] "kube-proxy-wmv2h" [ada9f815-81f2-497a-b88b-78fa4996eda6] Running
	I0205 02:04:49.103096   20618 system_pods.go:61] "kube-scheduler-addons-395572" [e8c9ceda-b88a-4651-a2b9-bebc86811547] Running
	I0205 02:04:49.103101   20618 system_pods.go:61] "metrics-server-7fbb699795-62dtn" [7ed1c285-e119-4991-b562-b48bc209460b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0205 02:04:49.103109   20618 system_pods.go:61] "nvidia-device-plugin-daemonset-2pc2d" [e2d6cd73-98b6-4f84-a95f-df50eed11a24] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0205 02:04:49.103114   20618 system_pods.go:61] "registry-6c88467877-z8t9s" [00899170-2971-4aba-8699-bd3bc4501a36] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0205 02:04:49.103121   20618 system_pods.go:61] "registry-proxy-4hkv4" [5b4564c0-6a17-4a35-b52f-f28e1c4622a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0205 02:04:49.103126   20618 system_pods.go:61] "snapshot-controller-68b874b76f-5vksf" [6dd805cb-3ca9-4d3e-8464-1ed77ccd239a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0205 02:04:49.103133   20618 system_pods.go:61] "snapshot-controller-68b874b76f-5xsqv" [ccd517a5-37f4-46e8-8be2-b3e14d268b2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0205 02:04:49.103136   20618 system_pods.go:61] "storage-provisioner" [3184710f-e171-4100-8b78-37f5590c9b16] Running
	I0205 02:04:49.103142   20618 system_pods.go:74] duration metric: took 11.590296ms to wait for pod list to return data ...
	I0205 02:04:49.103153   20618 default_sa.go:34] waiting for default service account to be created ...
	I0205 02:04:49.127038   20618 default_sa.go:45] found service account: "default"
	I0205 02:04:49.127063   20618 default_sa.go:55] duration metric: took 23.905018ms for default service account to be created ...
	I0205 02:04:49.127074   20618 system_pods.go:116] waiting for k8s-apps to be running ...
	I0205 02:04:49.193634   20618 system_pods.go:86] 19 kube-system pods found
	I0205 02:04:49.193666   20618 system_pods.go:89] "amd-gpu-device-plugin-v2g4x" [f4c9493c-8fa3-44c5-bff3-3a4e156a1233] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0205 02:04:49.193674   20618 system_pods.go:89] "coredns-668d6bf9bc-c4gpv" [55d96cfe-50b0-4fe1-8739-e44f0225d19b] Running
	I0205 02:04:49.193680   20618 system_pods.go:89] "coredns-668d6bf9bc-ntzb5" [a838af29-6255-4892-9072-e805c99bcbc4] Running
	I0205 02:04:49.193685   20618 system_pods.go:89] "csi-hostpath-attacher-0" [008c0274-352d-4f18-a798-24149ea2d7dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0205 02:04:49.193691   20618 system_pods.go:89] "csi-hostpath-resizer-0" [63832836-7bed-4292-b28c-3bd2c4682aae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0205 02:04:49.193697   20618 system_pods.go:89] "csi-hostpathplugin-nw2vn" [f6374d3e-ea9b-4862-aa55-5619fd62c262] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0205 02:04:49.193701   20618 system_pods.go:89] "etcd-addons-395572" [c9db7a76-6aee-48f9-ace9-a9b5bfe48f17] Running
	I0205 02:04:49.193704   20618 system_pods.go:89] "kube-apiserver-addons-395572" [c46e892f-8e39-43b5-a613-f9f77296a248] Running
	I0205 02:04:49.193708   20618 system_pods.go:89] "kube-controller-manager-addons-395572" [de63c3b7-306d-4fab-aeb9-bbae844a1b49] Running
	I0205 02:04:49.193713   20618 system_pods.go:89] "kube-ingress-dns-minikube" [0453f72c-00bd-4d62-99f9-7d6837d37e34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0205 02:04:49.193716   20618 system_pods.go:89] "kube-proxy-wmv2h" [ada9f815-81f2-497a-b88b-78fa4996eda6] Running
	I0205 02:04:49.193719   20618 system_pods.go:89] "kube-scheduler-addons-395572" [e8c9ceda-b88a-4651-a2b9-bebc86811547] Running
	I0205 02:04:49.193723   20618 system_pods.go:89] "metrics-server-7fbb699795-62dtn" [7ed1c285-e119-4991-b562-b48bc209460b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0205 02:04:49.193728   20618 system_pods.go:89] "nvidia-device-plugin-daemonset-2pc2d" [e2d6cd73-98b6-4f84-a95f-df50eed11a24] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0205 02:04:49.193733   20618 system_pods.go:89] "registry-6c88467877-z8t9s" [00899170-2971-4aba-8699-bd3bc4501a36] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0205 02:04:49.193737   20618 system_pods.go:89] "registry-proxy-4hkv4" [5b4564c0-6a17-4a35-b52f-f28e1c4622a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0205 02:04:49.193746   20618 system_pods.go:89] "snapshot-controller-68b874b76f-5vksf" [6dd805cb-3ca9-4d3e-8464-1ed77ccd239a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0205 02:04:49.193754   20618 system_pods.go:89] "snapshot-controller-68b874b76f-5xsqv" [ccd517a5-37f4-46e8-8be2-b3e14d268b2b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0205 02:04:49.193758   20618 system_pods.go:89] "storage-provisioner" [3184710f-e171-4100-8b78-37f5590c9b16] Running
	I0205 02:04:49.193765   20618 system_pods.go:126] duration metric: took 66.68613ms to wait for k8s-apps to be running ...
	I0205 02:04:49.193773   20618 system_svc.go:44] waiting for kubelet service to be running ....
	I0205 02:04:49.193813   20618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:04:49.258246   20618 system_svc.go:56] duration metric: took 64.465906ms WaitForService to wait for kubelet
	I0205 02:04:49.258276   20618 kubeadm.go:582] duration metric: took 12.073004465s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 02:04:49.258292   20618 node_conditions.go:102] verifying NodePressure condition ...
	I0205 02:04:49.263283   20618 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0205 02:04:49.263310   20618 node_conditions.go:123] node cpu capacity is 2
	I0205 02:04:49.263322   20618 node_conditions.go:105] duration metric: took 5.026347ms to run NodePressure ...
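The NodePressure verification above reports the node's capacity (cpu 2, ephemeral-storage 17734596Ki) and checks pressure conditions. The sketch below lists nodes with client-go and prints the same fields; the kubeconfig path is an assumption and the check itself is illustrative rather than minikube's node_conditions code.

// node_conditions_sketch.go - print node capacity and flag any pressure condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())

		for _, cond := range n.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if cond.Status == corev1.ConditionTrue {
					fmt.Printf("  %s is True: %s\n", cond.Type, cond.Message)
				}
			}
		}
	}
}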
	I0205 02:04:49.263333   20618 start.go:241] waiting for startup goroutines ...
	I0205 02:04:49.401607   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:49.401691   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:49.576946   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:49.582255   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:49.904345   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:49.904645   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:50.077547   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:50.083396   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:50.399767   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:50.401173   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:50.576024   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:50.581674   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:50.899961   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:50.900454   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:51.076605   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:51.081878   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:51.400706   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:51.400870   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:51.576875   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:51.581363   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:51.900601   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:51.900726   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:52.076796   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:52.081155   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:52.400527   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:52.401205   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:52.575819   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:52.581203   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:52.939512   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:52.939603   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:53.077141   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:53.081615   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:53.399242   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:53.400045   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:53.576204   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:53.582261   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:53.902945   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:53.903231   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:54.076616   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:54.081359   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:54.400627   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:54.400845   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:54.576414   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:54.581727   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:54.899558   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:54.899714   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:55.077825   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:55.084082   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:55.401145   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:55.401237   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:55.575634   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:55.581809   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:55.900136   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:55.900948   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:56.076496   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:56.081923   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:56.400642   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:56.400823   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:56.576893   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:56.581607   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:56.899952   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:56.901041   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:57.109207   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:57.109835   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:57.400323   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:57.400713   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:57.576219   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:57.582348   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:57.901185   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:57.901439   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:58.076739   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:58.081552   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:58.400560   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:58.400649   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:58.576524   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:58.582154   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:58.899800   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:58.900839   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:59.076364   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:59.082030   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:59.463640   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:59.463941   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:59.576498   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:59.582403   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:59.900128   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:59.900229   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:00.077239   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:00.081680   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:00.399358   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:00.400222   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:00.575683   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:00.580761   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:00.900365   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:00.900398   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:01.076241   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:01.081876   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:01.406770   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:01.406793   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:01.694130   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:01.694374   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:01.901426   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:01.901643   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:02.077027   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:02.081640   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:02.399633   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:02.400184   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:02.575824   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:02.581491   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:02.900658   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:02.900814   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:03.076657   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:03.081252   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:03.401276   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:03.401984   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:03.576976   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:03.581865   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:04.099873   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:04.100056   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:04.100214   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:04.100412   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:04.399498   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:04.400563   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:04.576564   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:04.582088   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:04.900914   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:04.901010   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:05.076285   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:05.081892   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:05.400486   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:05.401259   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:05.576248   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:05.581816   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:05.901054   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:05.901062   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:06.076079   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:06.081729   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:06.400918   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:06.401045   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:06.576533   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:06.582752   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:06.899697   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:06.899738   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:07.076316   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:07.081489   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:07.400683   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:07.400875   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:07.576641   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:07.581044   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:07.909988   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:07.910563   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:08.076521   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:08.081717   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:08.442061   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:08.442930   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:08.576872   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:08.581579   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:08.901076   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:08.901302   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:09.076508   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:09.082019   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:09.403115   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:09.403271   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:09.576110   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:09.582369   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:09.900160   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:09.900163   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:10.076516   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:10.082269   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:10.400949   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:10.401201   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:10.576042   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:10.582101   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:10.900219   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:10.900746   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:11.076922   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:11.081124   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:11.400928   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:11.401095   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:11.576950   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:11.581772   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:11.900621   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:11.900762   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:12.076547   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:12.082164   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:12.399800   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:12.400863   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:12.577109   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:12.581447   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:12.900900   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:12.901058   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:13.076661   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:13.080827   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:13.638583   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:13.638693   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:13.638755   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:13.638783   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:13.901316   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:13.901478   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:14.076341   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:14.081706   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:14.400729   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:14.401148   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:14.576706   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:14.581405   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:14.900558   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:14.900783   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:15.076287   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:15.081858   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:15.400311   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:15.400359   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:15.580249   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:15.582738   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:15.900611   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:15.901411   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:16.076188   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:16.081819   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:16.399949   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:16.400457   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:16.576861   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:16.581608   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:16.899752   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:16.900355   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:17.076044   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:17.081783   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:17.401106   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:17.401393   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:17.577490   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:17.582519   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:17.900020   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:17.900679   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:18.076476   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:18.082600   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:18.399464   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:18.401005   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:18.577058   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:18.582054   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:18.901671   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:18.901754   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:19.076953   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:19.081653   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:19.400894   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:19.400912   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:19.577142   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:19.581651   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:19.900561   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:19.900592   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:20.078836   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:20.178916   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:20.400283   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:20.400386   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:20.577195   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:20.581980   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:20.899626   20618 kapi.go:107] duration metric: took 35.002538715s to wait for kubernetes.io/minikube-addons=registry ...
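The kapi.go waits above (registry here after ~35s, with ingress-nginx, csi-hostpath-driver, and gcp-auth still pending) poll pods by label selector. A sketch of that style of selector-based wait follows, assuming a simple Running-phase check, a 3s interval, and the in-VM kubeconfig; minikube's actual kapi code is not reproduced here.

// addon_label_wait_sketch.go - wait until every pod matching a label selector is Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForSelector(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 3*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Selector taken from the log line above.
	err = waitForSelector(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
	fmt.Println("registry pods running:", err == nil)
}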
	I0205 02:05:20.899716   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:21.078861   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:21.080745   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:21.399693   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:21.580806   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:21.582268   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:21.900138   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:22.075854   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:22.081616   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:22.400151   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:22.576095   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:22.581581   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:22.899330   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:23.075831   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:23.081673   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:23.399672   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:23.576684   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:23.580961   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:23.901599   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:24.076631   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:24.082072   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:24.400021   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:24.576688   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:24.580818   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:24.900296   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:25.076443   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:25.081857   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:25.399651   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:25.576617   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:25.582933   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:25.900108   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:26.077319   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:26.081868   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:26.723357   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:26.723618   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:26.723851   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:26.899801   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:27.076479   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:27.082004   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:27.400906   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:27.583018   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:27.583029   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:27.900338   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:28.075894   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:28.081770   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:28.400278   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:28.575699   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:28.581515   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:28.899638   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:29.076067   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:29.081642   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:29.399564   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:29.576151   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:29.581523   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:29.900308   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:30.076403   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:30.081915   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:30.399919   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:30.579294   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:30.581304   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:30.902996   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:31.077228   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:31.082714   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:31.399339   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:31.576113   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:31.581726   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:31.899574   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:32.076062   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:32.081961   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:32.399781   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:32.576651   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:32.581633   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:33.033821   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:33.076187   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:33.081853   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:33.400977   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:33.576532   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:33.582192   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:33.900184   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:34.076223   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:34.082300   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:34.400237   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:34.576060   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:34.581884   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:34.911149   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:35.077702   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:35.080901   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:35.400520   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:35.576175   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:35.581761   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:35.899610   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:36.076547   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:36.082440   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:36.399683   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:36.576998   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:36.581360   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:36.900964   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:37.077408   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:37.082651   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:37.399588   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:37.576333   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:37.581954   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:37.899970   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:38.076572   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:38.082326   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:38.400591   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:38.576260   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:38.582581   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:38.901647   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:39.083409   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:39.083614   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:39.399751   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:39.576449   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:39.582230   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:39.899470   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:40.076666   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:40.080924   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:40.400471   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:40.584283   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:40.584641   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:40.906938   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:41.078114   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:41.081546   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:41.399968   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:41.576525   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:41.581877   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:41.901861   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:42.076532   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:42.081855   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:42.400111   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:42.576854   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:42.582565   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:42.899346   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:43.077806   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:43.081365   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:43.400802   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:43.576823   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:43.581439   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:43.901966   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:44.077251   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:44.178600   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:44.399779   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:44.576434   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:44.582075   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:44.900404   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:45.076753   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:45.081936   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:45.407921   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:45.577061   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:45.581334   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:45.901919   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:46.076797   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:46.081424   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:46.770726   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:46.771342   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:46.772273   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:46.900878   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:47.082965   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:47.085462   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:47.399903   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:47.576397   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:47.582501   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:47.900071   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:48.076568   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:48.083141   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:48.401568   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:48.579253   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:48.582356   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:48.899781   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:49.076397   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:49.082898   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:49.401463   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:49.578012   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:49.585723   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:49.900436   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:50.077504   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:50.082161   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:50.400508   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:50.576043   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:50.581453   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:50.900292   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:51.076189   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:51.081620   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:51.403483   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:51.576409   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:51.581987   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:51.900114   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:52.077688   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:52.081400   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:52.400165   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:52.575971   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:52.581589   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:52.900898   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:53.077208   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:53.082146   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:53.401165   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:53.576904   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:53.582139   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:53.900899   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:54.077080   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:54.086948   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:54.608173   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:54.608489   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:54.608693   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:54.900031   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:55.076086   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:55.081369   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:55.400415   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:55.575632   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:55.582023   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:55.902474   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:56.077406   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:56.083499   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:56.619766   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:56.620015   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:56.625525   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:56.900234   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:57.076053   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:57.081662   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:57.399887   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:57.576463   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:57.581992   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:57.900611   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:58.079737   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:58.086446   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:58.406790   20618 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:58.577721   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:58.581842   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:58.900096   20618 kapi.go:107] duration metric: took 1m13.003594958s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0205 02:05:59.075679   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:59.081273   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:06:00.020200   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:06:00.082956   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:06:00.096157   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:06:00.098781   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:06:00.577225   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:06:00.582075   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:06:01.076489   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:06:01.082925   20618 kapi.go:107] duration metric: took 1m14.004346277s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0205 02:06:01.576231   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:06:02.076491   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:06:02.576168   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:06:03.078903   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:06:03.576641   20618 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:06:04.076324   20618 kapi.go:107] duration metric: took 1m15.003192699s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0205 02:06:04.078060   20618 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-395572 cluster.
	I0205 02:06:04.079191   20618 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0205 02:06:04.080401   20618 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0205 02:06:04.081554   20618 out.go:177] * Enabled addons: ingress-dns, inspektor-gadget, storage-provisioner, nvidia-device-plugin, amd-gpu-device-plugin, metrics-server, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0205 02:06:04.082611   20618 addons.go:514] duration metric: took 1m26.897284954s for enable addons: enabled=[ingress-dns inspektor-gadget storage-provisioner nvidia-device-plugin amd-gpu-device-plugin metrics-server cloud-spanner yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0205 02:06:04.082649   20618 start.go:246] waiting for cluster config update ...
	I0205 02:06:04.082669   20618 start.go:255] writing updated cluster config ...
	I0205 02:06:04.082940   20618 ssh_runner.go:195] Run: rm -f paused
	I0205 02:06:04.136084   20618 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0205 02:06:04.137732   20618 out.go:177] * Done! kubectl is now configured to use "addons-395572" cluster and "default" namespace by default
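	(The `gcp-auth-skip-secret` opt-out mentioned in the gcp-auth messages above is applied as a pod label at creation time. A minimal sketch, with a pod name, image, and command chosen purely for illustration and not taken from this test run:
	
	    kubectl --context addons-395572 run skip-gcp-auth-demo --image=busybox --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600
	
	Per the addon's own message, pods carrying this label are skipped by gcp-auth, so no GCP credentials are mounted into them.)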
	
	
	==> CRI-O <==
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.562926819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03cdea88218a196c5b5fb1be6c52a3551ac1d040323ead8c1ff04c59949e4df6,PodSandboxId:ee157bcca3515eaada90aa0bb3f3d1a402e2bd9461db70c08f51e1e014953037,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1738721212630881515,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 597824e3-3c20-407e-b032-c884d6df1ddd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f852e96d5a036d413b27b237ca2100a0b6f2e35b7e3a6945fe5abfb40eb351cd,PodSandboxId:911a0083a568cdbbb0bb71e93b53ae8975e21fa91ce84afc3db2cb025f6d71f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1738721167327720426,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fdadd699-3496-4062-9839-8a5f36de0948,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3c79d142906cf3b8b61b54c21eaa45691fad12d3fdf699062eae47e65e7ed2,PodSandboxId:b21bbc9e1487f632141d74f2818c0fef45d227728cdf4193d7bb2d312f96d159,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1738721157970376921,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-9f6qj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 109472cb-6d15-4d75-b53a-20868c9e303f,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:11def4b2bee23505aaf3a3bbc4aa3591f4833d9fad99c1fd7c2e5eaa027c52d1,PodSandboxId:c457617cb863cbad6259efa12bd44e76b08eb3c1029f5836b9a408ffc87f5a67,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1738721157882435516,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kgpxz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e89f44eb-69f2-4ee9-91de-8b954a7d6585,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f60b3a73818c269954864e7c26d588403ad60fe5deb73d97a0b3f3f43f25b0,PodSandboxId:aa59e24748ae75c28731a35e04822a5b1801087aa738b8a978dd04f8f00015a3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1738721140729940031,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ds7sg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4858f415-d99e-4fec-b657-8f228d04b580,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66992773c567e1079ad6c192333baa59e7558bb3724eacfc24576cc5f90045e2,PodSandboxId:b03fce6a9308e60751af1c68d5e3d0d720126263c5591699427bcfdfb4b6c31a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1738721136417788423,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-9mtqz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5584f642-888f-4703-8c3f-e93002ebe4da,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dcba2aff9522fa7c0f93253f8157e956d0b5d51d29d2ef864e1f6cbcab92ef,PodSandboxId:9951417c1b48f463114918ad66e8499b2f21f051b48672b26adb35dd707d53a4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1738721108876392220,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-v2g4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c9493c-8fa3-44c5-bff3-3a4e156a1233,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0f89e2c16b2a8059dfea09f984e2dacd06cb56ca30af914ccb843a9798d72e,PodSandboxId:f76a50a891f615139badf5d5b33f41ca6e58149453cbfd279224d07e51c27f6c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d46
0978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1738721093588921140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453f72c-00bd-4d62-99f9-7d6837d37e34,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a9c4186af4d4f6e1cec7344e02c78ab602a37e7cbd4dc8de0c7f028756a2ab,PodSandboxId:a6751a4244a2161f74b479a68245ff3552d4e6a2fff7f84acb6b38f1e97fa4ec,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738721083861022534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3184710f-e171-4100-8b78-37f5590c9b16,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729283c3ca5ded343db75b0cbc38887ad74593ce51084cc56a4224d7781521ed,PodSandboxId:24dc300a7adaef42deec540866d556e29bdf25232630e4793a8348caaf21b23d,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738721082575803711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-c4gpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55d96cfe-50b0-4fe1-8739-e44f0225d19b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:32bf9f7e6fcd833721bb342cb73b11fa4967e83a01959884d6557237275d7201,PodSandboxId:3669a02541d3a6c2f48a752539b426480ea4b5f37219facd6eaff733491125d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738721077989534237,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmv2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada9f815-81f2-497a-b88b-78fa4996eda6,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4761e834f1015cc51
83539165618d806967c59a1b79c4c27a9446fe3baf6a741,PodSandboxId:a7ad047701c76f29ffc0835e925491e8fac278f451a1782e1a6bc032baaa7388,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738721067017505677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd607d9672b5bc7d360731907ef2cff,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98456be81c1f650ec250f6dea72c5ae1f6bbe435c20b37b59cb0fa3d5996a7fc
,PodSandboxId:dbfc1bf3cc3168438ec55157773af31d8375148e5c7426ac3e7cd5ac5bd19b44,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738721067033067176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2340fe1b9c528391465b1cad62b880cb,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc31a7bfadf4a970c0f8f1fa27592b68a5ef7c36a5d177ed7d37ee0aa07fa48,PodSandboxId:a14
94fa084af027108d5f9d3eea8dbbd81639dbe5c39d56d1769c485dfa7b17b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738721067034840723,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042790824b2bd65942fdb01596ac21d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca032c7f4f6387f2e07cf9b1593b3cf5c8966762b7a5aa46febb37f19ec4897,PodSand
boxId:f11ca2468c83523220ead7c5ead8abca51e6d9d31fc26a194a7b46416607b92e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738721066952283528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 876ea4b93352a4583ac6ee98e5ebb851,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54cad384-7ff5-4889-8c16-84f0d893a934 name=/runtime.v1.
RuntimeService/ListContainers
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.567967269Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 5603b6ef-5c4e-4296-80d6-51a6ce2d8b6d,},},}" file="otel-collector/interceptors.go:62" id=1406e469-8a4a-4580-b9fb-a52449d79fa8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.568360889Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f9845d0b13c7a1a363fc155b5a1c89fc27874ce908e7663dcc4b1c0d58f9feaf,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-jkk42,Uid:5603b6ef-5c4e-4296-80d6-51a6ce2d8b6d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1738721351629634204,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-jkk42,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5603b6ef-5c4e-4296-80d6-51a6ce2d8b6d,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:09:11.319794174Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1406e469-8a4a-4580-b9fb-a52449d79fa8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.568820800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721352568795172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dadccbc2-4107-4347-90ad-ef93601abc20 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.569065899Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:f9845d0b13c7a1a363fc155b5a1c89fc27874ce908e7663dcc4b1c0d58f9feaf,Verbose:false,}" file="otel-collector/interceptors.go:62" id=e143d83d-48d5-45c4-b2bd-68d6d776fd8d name=/runtime.v1.RuntimeService/PodSandboxStatus
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.569297460Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:f9845d0b13c7a1a363fc155b5a1c89fc27874ce908e7663dcc4b1c0d58f9feaf,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-jkk42,Uid:5603b6ef-5c4e-4296-80d6-51a6ce2d8b6d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1738721351629634204,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-jkk42,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5603b6ef-5c4e-4296-80d6-51a6ce2d8b6d,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:09:11.319794174Z,kubernetes.io/config.source: api
,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=e143d83d-48d5-45c4-b2bd-68d6d776fd8d name=/runtime.v1.RuntimeService/PodSandboxStatus
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.570510445Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 5603b6ef-5c4e-4296-80d6-51a6ce2d8b6d,},},}" file="otel-collector/interceptors.go:62" id=1eb51bfe-6031-47d3-825a-e43e1a14520e name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.570699135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1eb51bfe-6031-47d3-825a-e43e1a14520e name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.570849636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1eb51bfe-6031-47d3-825a-e43e1a14520e name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.589474749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fad8cbef-b34c-4161-988e-eaf50da6a710 name=/runtime.v1.RuntimeService/Version
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.589548084Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fad8cbef-b34c-4161-988e-eaf50da6a710 name=/runtime.v1.RuntimeService/Version
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.590690766Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=397da2cd-bd85-4edd-bed9-abe0cf42de8c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.591791132Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721352591770404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=397da2cd-bd85-4edd-bed9-abe0cf42de8c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.592322308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f53d1c2e-7b36-455e-95ee-bfd64134f245 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.592382773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f53d1c2e-7b36-455e-95ee-bfd64134f245 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.592650195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03cdea88218a196c5b5fb1be6c52a3551ac1d040323ead8c1ff04c59949e4df6,PodSandboxId:ee157bcca3515eaada90aa0bb3f3d1a402e2bd9461db70c08f51e1e014953037,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1738721212630881515,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 597824e3-3c20-407e-b032-c884d6df1ddd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f852e96d5a036d413b27b237ca2100a0b6f2e35b7e3a6945fe5abfb40eb351cd,PodSandboxId:911a0083a568cdbbb0bb71e93b53ae8975e21fa91ce84afc3db2cb025f6d71f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1738721167327720426,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fdadd699-3496-4062-9839-8a5f36de0948,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3c79d142906cf3b8b61b54c21eaa45691fad12d3fdf699062eae47e65e7ed2,PodSandboxId:b21bbc9e1487f632141d74f2818c0fef45d227728cdf4193d7bb2d312f96d159,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1738721157970376921,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-9f6qj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 109472cb-6d15-4d75-b53a-20868c9e303f,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:11def4b2bee23505aaf3a3bbc4aa3591f4833d9fad99c1fd7c2e5eaa027c52d1,PodSandboxId:c457617cb863cbad6259efa12bd44e76b08eb3c1029f5836b9a408ffc87f5a67,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1738721157882435516,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kgpxz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e89f44eb-69f2-4ee9-91de-8b954a7d6585,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f60b3a73818c269954864e7c26d588403ad60fe5deb73d97a0b3f3f43f25b0,PodSandboxId:aa59e24748ae75c28731a35e04822a5b1801087aa738b8a978dd04f8f00015a3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1738721140729940031,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ds7sg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4858f415-d99e-4fec-b657-8f228d04b580,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66992773c567e1079ad6c192333baa59e7558bb3724eacfc24576cc5f90045e2,PodSandboxId:b03fce6a9308e60751af1c68d5e3d0d720126263c5591699427bcfdfb4b6c31a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1738721136417788423,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-9mtqz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5584f642-888f-4703-8c3f-e93002ebe4da,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dcba2aff9522fa7c0f93253f8157e956d0b5d51d29d2ef864e1f6cbcab92ef,PodSandboxId:9951417c1b48f463114918ad66e8499b2f21f051b48672b26adb35dd707d53a4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1738721108876392220,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-v2g4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c9493c-8fa3-44c5-bff3-3a4e156a1233,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0f89e2c16b2a8059dfea09f984e2dacd06cb56ca30af914ccb843a9798d72e,PodSandboxId:f76a50a891f615139badf5d5b33f41ca6e58149453cbfd279224d07e51c27f6c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d46
0978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1738721093588921140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453f72c-00bd-4d62-99f9-7d6837d37e34,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a9c4186af4d4f6e1cec7344e02c78ab602a37e7cbd4dc8de0c7f028756a2ab,PodSandboxId:a6751a4244a2161f74b479a68245ff3552d4e6a2fff7f84acb6b38f1e97fa4ec,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738721083861022534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3184710f-e171-4100-8b78-37f5590c9b16,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729283c3ca5ded343db75b0cbc38887ad74593ce51084cc56a4224d7781521ed,PodSandboxId:24dc300a7adaef42deec540866d556e29bdf25232630e4793a8348caaf21b23d,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738721082575803711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-c4gpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55d96cfe-50b0-4fe1-8739-e44f0225d19b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:32bf9f7e6fcd833721bb342cb73b11fa4967e83a01959884d6557237275d7201,PodSandboxId:3669a02541d3a6c2f48a752539b426480ea4b5f37219facd6eaff733491125d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738721077989534237,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmv2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada9f815-81f2-497a-b88b-78fa4996eda6,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4761e834f1015cc51
83539165618d806967c59a1b79c4c27a9446fe3baf6a741,PodSandboxId:a7ad047701c76f29ffc0835e925491e8fac278f451a1782e1a6bc032baaa7388,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738721067017505677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd607d9672b5bc7d360731907ef2cff,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98456be81c1f650ec250f6dea72c5ae1f6bbe435c20b37b59cb0fa3d5996a7fc
,PodSandboxId:dbfc1bf3cc3168438ec55157773af31d8375148e5c7426ac3e7cd5ac5bd19b44,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738721067033067176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2340fe1b9c528391465b1cad62b880cb,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc31a7bfadf4a970c0f8f1fa27592b68a5ef7c36a5d177ed7d37ee0aa07fa48,PodSandboxId:a14
94fa084af027108d5f9d3eea8dbbd81639dbe5c39d56d1769c485dfa7b17b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738721067034840723,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042790824b2bd65942fdb01596ac21d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca032c7f4f6387f2e07cf9b1593b3cf5c8966762b7a5aa46febb37f19ec4897,PodSand
boxId:f11ca2468c83523220ead7c5ead8abca51e6d9d31fc26a194a7b46416607b92e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738721066952283528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 876ea4b93352a4583ac6ee98e5ebb851,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f53d1c2e-7b36-455e-95ee-bfd64134f245 name=/runtime.v1.
RuntimeService/ListContainers
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.623373444Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ec8baa7-d3c8-4037-b21e-3b3f5260cc41 name=/runtime.v1.RuntimeService/Version
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.623455757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ec8baa7-d3c8-4037-b21e-3b3f5260cc41 name=/runtime.v1.RuntimeService/Version
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.624456406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6174545-793a-45d7-8cd5-51f4145ba470 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.625651224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721352625625300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6174545-793a-45d7-8cd5-51f4145ba470 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.626285498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f3d86e3-321e-4f1d-89ce-5d43ee837a20 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.626351198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f3d86e3-321e-4f1d-89ce-5d43ee837a20 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.626716299Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03cdea88218a196c5b5fb1be6c52a3551ac1d040323ead8c1ff04c59949e4df6,PodSandboxId:ee157bcca3515eaada90aa0bb3f3d1a402e2bd9461db70c08f51e1e014953037,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1738721212630881515,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 597824e3-3c20-407e-b032-c884d6df1ddd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f852e96d5a036d413b27b237ca2100a0b6f2e35b7e3a6945fe5abfb40eb351cd,PodSandboxId:911a0083a568cdbbb0bb71e93b53ae8975e21fa91ce84afc3db2cb025f6d71f7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1738721167327720426,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fdadd699-3496-4062-9839-8a5f36de0948,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3c79d142906cf3b8b61b54c21eaa45691fad12d3fdf699062eae47e65e7ed2,PodSandboxId:b21bbc9e1487f632141d74f2818c0fef45d227728cdf4193d7bb2d312f96d159,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1738721157970376921,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-9f6qj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 109472cb-6d15-4d75-b53a-20868c9e303f,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:11def4b2bee23505aaf3a3bbc4aa3591f4833d9fad99c1fd7c2e5eaa027c52d1,PodSandboxId:c457617cb863cbad6259efa12bd44e76b08eb3c1029f5836b9a408ffc87f5a67,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1738721157882435516,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-kgpxz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e89f44eb-69f2-4ee9-91de-8b954a7d6585,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f60b3a73818c269954864e7c26d588403ad60fe5deb73d97a0b3f3f43f25b0,PodSandboxId:aa59e24748ae75c28731a35e04822a5b1801087aa738b8a978dd04f8f00015a3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1738721140729940031,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ds7sg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4858f415-d99e-4fec-b657-8f228d04b580,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66992773c567e1079ad6c192333baa59e7558bb3724eacfc24576cc5f90045e2,PodSandboxId:b03fce6a9308e60751af1c68d5e3d0d720126263c5591699427bcfdfb4b6c31a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1738721136417788423,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-9mtqz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5584f642-888f-4703-8c3f-e93002ebe4da,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dcba2aff9522fa7c0f93253f8157e956d0b5d51d29d2ef864e1f6cbcab92ef,PodSandboxId:9951417c1b48f463114918ad66e8499b2f21f051b48672b26adb35dd707d53a4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1738721108876392220,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-v2g4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c9493c-8fa3-44c5-bff3-3a4e156a1233,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0f89e2c16b2a8059dfea09f984e2dacd06cb56ca30af914ccb843a9798d72e,PodSandboxId:f76a50a891f615139badf5d5b33f41ca6e58149453cbfd279224d07e51c27f6c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d46
0978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1738721093588921140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0453f72c-00bd-4d62-99f9-7d6837d37e34,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8a9c4186af4d4f6e1cec7344e02c78ab602a37e7cbd4dc8de0c7f028756a2ab,PodSandboxId:a6751a4244a2161f74b479a68245ff3552d4e6a2fff7f84acb6b38f1e97fa4ec,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738721083861022534,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3184710f-e171-4100-8b78-37f5590c9b16,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729283c3ca5ded343db75b0cbc38887ad74593ce51084cc56a4224d7781521ed,PodSandboxId:24dc300a7adaef42deec540866d556e29bdf25232630e4793a8348caaf21b23d,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738721082575803711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-c4gpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55d96cfe-50b0-4fe1-8739-e44f0225d19b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:32bf9f7e6fcd833721bb342cb73b11fa4967e83a01959884d6557237275d7201,PodSandboxId:3669a02541d3a6c2f48a752539b426480ea4b5f37219facd6eaff733491125d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738721077989534237,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wmv2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada9f815-81f2-497a-b88b-78fa4996eda6,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4761e834f1015cc51
83539165618d806967c59a1b79c4c27a9446fe3baf6a741,PodSandboxId:a7ad047701c76f29ffc0835e925491e8fac278f451a1782e1a6bc032baaa7388,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738721067017505677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd607d9672b5bc7d360731907ef2cff,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98456be81c1f650ec250f6dea72c5ae1f6bbe435c20b37b59cb0fa3d5996a7fc
,PodSandboxId:dbfc1bf3cc3168438ec55157773af31d8375148e5c7426ac3e7cd5ac5bd19b44,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738721067033067176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2340fe1b9c528391465b1cad62b880cb,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc31a7bfadf4a970c0f8f1fa27592b68a5ef7c36a5d177ed7d37ee0aa07fa48,PodSandboxId:a14
94fa084af027108d5f9d3eea8dbbd81639dbe5c39d56d1769c485dfa7b17b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738721067034840723,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 042790824b2bd65942fdb01596ac21d7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca032c7f4f6387f2e07cf9b1593b3cf5c8966762b7a5aa46febb37f19ec4897,PodSand
boxId:f11ca2468c83523220ead7c5ead8abca51e6d9d31fc26a194a7b46416607b92e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738721066952283528,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-395572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 876ea4b93352a4583ac6ee98e5ebb851,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f3d86e3-321e-4f1d-89ce-5d43ee837a20 name=/runtime.v1.
RuntimeService/ListContainers
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.639904306Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=165be154-a4f2-4ac5-9dab-dc66d2aecc33 name=/runtime.v1.RuntimeService/Status
	Feb 05 02:09:12 addons-395572 crio[661]: time="2025-02-05 02:09:12.639991062Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=165be154-a4f2-4ac5-9dab-dc66d2aecc33 name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	03cdea88218a1       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                              2 minutes ago       Running             nginx                     0                   ee157bcca3515       nginx
	f852e96d5a036       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   911a0083a568c       busybox
	9d3c79d142906       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   b21bbc9e1487f       ingress-nginx-controller-56d7c84fd4-9f6qj
	11def4b2bee23       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     2                   c457617cb863c       ingress-nginx-admission-patch-kgpxz
	29f60b3a73818       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   aa59e24748ae7       ingress-nginx-admission-create-ds7sg
	66992773c567e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   b03fce6a9308e       local-path-provisioner-76f89f99b5-9mtqz
	25dcba2aff952       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   9951417c1b48f       amd-gpu-device-plugin-v2g4x
	5f0f89e2c16b2       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   f76a50a891f61       kube-ingress-dns-minikube
	e8a9c4186af4d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   a6751a4244a21       storage-provisioner
	729283c3ca5de       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   24dc300a7adae       coredns-668d6bf9bc-c4gpv
	32bf9f7e6fcd8       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago       Running             kube-proxy                0                   3669a02541d3a       kube-proxy-wmv2h
	2bc31a7bfadf4       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago       Running             kube-controller-manager   0                   a1494fa084af0       kube-controller-manager-addons-395572
	98456be81c1f6       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago       Running             kube-scheduler            0                   dbfc1bf3cc316       kube-scheduler-addons-395572
	4761e834f1015       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   a7ad047701c76       etcd-addons-395572
	cca032c7f4f63       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago       Running             kube-apiserver            0                   f11ca2468c835       kube-apiserver-addons-395572
	
	
	==> coredns [729283c3ca5ded343db75b0cbc38887ad74593ce51084cc56a4224d7781521ed] <==
	[INFO] 10.244.0.7:52963 - 51261 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000480254s
	[INFO] 10.244.0.7:52963 - 50995 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000126254s
	[INFO] 10.244.0.7:52963 - 28004 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000316501s
	[INFO] 10.244.0.7:52963 - 4879 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000090588s
	[INFO] 10.244.0.7:52963 - 23334 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000073265s
	[INFO] 10.244.0.7:52963 - 5846 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000108877s
	[INFO] 10.244.0.7:52963 - 20204 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000090973s
	[INFO] 10.244.0.7:39277 - 11717 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000157089s
	[INFO] 10.244.0.7:39277 - 11433 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000082131s
	[INFO] 10.244.0.7:32802 - 19659 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099157s
	[INFO] 10.244.0.7:32802 - 19446 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085618s
	[INFO] 10.244.0.7:46212 - 46220 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000151693s
	[INFO] 10.244.0.7:46212 - 46030 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074974s
	[INFO] 10.244.0.7:47701 - 45376 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000080216s
	[INFO] 10.244.0.7:47701 - 45221 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000074591s
	[INFO] 10.244.0.23:35376 - 24491 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000468166s
	[INFO] 10.244.0.23:52190 - 48183 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000918797s
	[INFO] 10.244.0.23:55174 - 52969 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095695s
	[INFO] 10.244.0.23:45849 - 54417 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167665s
	[INFO] 10.244.0.23:33331 - 18169 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136933s
	[INFO] 10.244.0.23:49111 - 981 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000071971s
	[INFO] 10.244.0.23:55802 - 8702 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001576131s
	[INFO] 10.244.0.23:49530 - 60945 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002133698s
	[INFO] 10.244.0.26:55818 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000599222s
	[INFO] 10.244.0.26:34247 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000107151s
	
	
	==> describe nodes <==
	Name:               addons-395572
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-395572
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d
	                    minikube.k8s.io/name=addons-395572
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_05T02_04_32_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-395572
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 05 Feb 2025 02:04:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-395572
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 05 Feb 2025 02:09:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 05 Feb 2025 02:07:05 +0000   Wed, 05 Feb 2025 02:04:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 05 Feb 2025 02:07:05 +0000   Wed, 05 Feb 2025 02:04:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 05 Feb 2025 02:07:05 +0000   Wed, 05 Feb 2025 02:04:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 05 Feb 2025 02:07:05 +0000   Wed, 05 Feb 2025 02:04:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    addons-395572
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 05e34367aede4c53ae8fd4f4216cb402
	  System UUID:                05e34367-aede-4c53-ae8f-d4f4216cb402
	  Boot ID:                    de51ead6-af38-45ec-92ed-5e8e1c9624e8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-7d9564db4-jkk42              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-9f6qj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m27s
	  kube-system                 amd-gpu-device-plugin-v2g4x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 coredns-668d6bf9bc-c4gpv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m35s
	  kube-system                 etcd-addons-395572                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-addons-395572                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-controller-manager-addons-395572        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-wmv2h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-addons-395572                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  local-path-storage          local-path-provisioner-76f89f99b5-9mtqz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m33s  kube-proxy       
	  Normal  Starting                 4m40s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m40s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m40s  kubelet          Node addons-395572 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s  kubelet          Node addons-395572 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s  kubelet          Node addons-395572 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m39s  kubelet          Node addons-395572 status is now: NodeReady
	  Normal  RegisteredNode           4m36s  node-controller  Node addons-395572 event: Registered Node addons-395572 in Controller
	
	
	==> dmesg <==
	[  +6.226581] systemd-fstab-generator[1216]: Ignoring "noauto" option for root device
	[  +0.079823] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.371445] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +0.129868] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.010792] kauditd_printk_skb: 119 callbacks suppressed
	[  +5.009176] kauditd_printk_skb: 127 callbacks suppressed
	[  +6.073826] kauditd_printk_skb: 85 callbacks suppressed
	[Feb 5 02:05] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.897381] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.500103] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.428761] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.557869] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.402264] kauditd_printk_skb: 12 callbacks suppressed
	[Feb 5 02:06] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.768734] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.571063] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.994736] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.020039] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.539300] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.702698] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.053792] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.070171] kauditd_printk_skb: 12 callbacks suppressed
	[Feb 5 02:07] kauditd_printk_skb: 23 callbacks suppressed
	[  +8.578262] kauditd_printk_skb: 7 callbacks suppressed
	[Feb 5 02:09] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [4761e834f1015cc5183539165618d806967c59a1b79c4c27a9446fe3baf6a741] <==
	{"level":"warn","ts":"2025-02-05T02:05:56.606378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.290972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-05T02:05:56.606453Z","caller":"traceutil/trace.go:171","msg":"trace[1186527607] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1076; }","duration":"218.400489ms","start":"2025-02-05T02:05:56.388046Z","end":"2025-02-05T02:05:56.606446Z","steps":["trace[1186527607] 'agreement among raft nodes before linearized reading'  (duration: 218.249523ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:05:56.607046Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.390913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-02-05T02:05:56.607146Z","caller":"traceutil/trace.go:171","msg":"trace[1878295834] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1076; }","duration":"102.522778ms","start":"2025-02-05T02:05:56.504613Z","end":"2025-02-05T02:05:56.607136Z","steps":["trace[1878295834] 'agreement among raft nodes before linearized reading'  (duration: 102.077655ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:06:00.007377Z","caller":"traceutil/trace.go:171","msg":"trace[800644246] transaction","detail":"{read_only:false; response_revision:1097; number_of_response:1; }","duration":"447.838651ms","start":"2025-02-05T02:05:59.559523Z","end":"2025-02-05T02:06:00.007361Z","steps":["trace[800644246] 'process raft request'  (duration: 447.633655ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:06:00.007472Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T02:05:59.559506Z","time spent":"447.928144ms","remote":"127.0.0.1:59728","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<>"}
	{"level":"info","ts":"2025-02-05T02:06:00.007606Z","caller":"traceutil/trace.go:171","msg":"trace[1987692759] linearizableReadLoop","detail":"{readStateIndex:1133; appliedIndex:1133; }","duration":"442.414603ms","start":"2025-02-05T02:05:59.565182Z","end":"2025-02-05T02:06:00.007597Z","steps":["trace[1987692759] 'read index received'  (duration: 442.409647ms)","trace[1987692759] 'applied index is now lower than readState.Index'  (duration: 3.786µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-05T02:06:00.007735Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"442.542405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-05T02:06:00.007764Z","caller":"traceutil/trace.go:171","msg":"trace[1949432078] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1097; }","duration":"442.599247ms","start":"2025-02-05T02:05:59.565159Z","end":"2025-02-05T02:06:00.007758Z","steps":["trace[1949432078] 'agreement among raft nodes before linearized reading'  (duration: 442.49824ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:06:00.007800Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T02:05:59.565145Z","time spent":"442.644441ms","remote":"127.0.0.1:59636","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-02-05T02:06:00.014809Z","caller":"traceutil/trace.go:171","msg":"trace[1254210691] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"415.126229ms","start":"2025-02-05T02:05:59.599671Z","end":"2025-02-05T02:06:00.014797Z","steps":["trace[1254210691] 'process raft request'  (duration: 415.060746ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:06:00.015046Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T02:05:59.599652Z","time spent":"415.265943ms","remote":"127.0.0.1:59546","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":781,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-56d7c84fd4-9f6qj.18212daeb9389587\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-56d7c84fd4-9f6qj.18212daeb9389587\" value_size:675 lease:41821934169682813 >> failure:<>"}
	{"level":"warn","ts":"2025-02-05T02:06:00.016756Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"408.67233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2025-02-05T02:06:00.016806Z","caller":"traceutil/trace.go:171","msg":"trace[1179159727] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1099; }","duration":"408.842942ms","start":"2025-02-05T02:05:59.607955Z","end":"2025-02-05T02:06:00.016798Z","steps":["trace[1179159727] 'agreement among raft nodes before linearized reading'  (duration: 408.746194ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:06:00.016827Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T02:05:59.607940Z","time spent":"408.880365ms","remote":"127.0.0.1:59728","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":1,"response size":577,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 "}
	{"level":"warn","ts":"2025-02-05T02:06:00.016935Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"445.732274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-05T02:06:00.016948Z","caller":"traceutil/trace.go:171","msg":"trace[1541543486] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1099; }","duration":"445.764715ms","start":"2025-02-05T02:05:59.571179Z","end":"2025-02-05T02:06:00.016944Z","steps":["trace[1541543486] 'agreement among raft nodes before linearized reading'  (duration: 445.743079ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:06:00.016959Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T02:05:59.571163Z","time spent":"445.793032ms","remote":"127.0.0.1:59636","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-02-05T02:06:00.017085Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.537912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-kgpxz\" limit:1 ","response":"range_response_count:1 size:4536"}
	{"level":"info","ts":"2025-02-05T02:06:00.017101Z","caller":"traceutil/trace.go:171","msg":"trace[410119960] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-kgpxz; range_end:; response_count:1; response_revision:1098; }","duration":"276.583539ms","start":"2025-02-05T02:05:59.740512Z","end":"2025-02-05T02:06:00.017096Z","steps":["trace[410119960] 'agreement among raft nodes before linearized reading'  (duration: 274.959333ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:06:43.557547Z","caller":"traceutil/trace.go:171","msg":"trace[1744967978] transaction","detail":"{read_only:false; response_revision:1441; number_of_response:1; }","duration":"100.578737ms","start":"2025-02-05T02:06:43.456949Z","end":"2025-02-05T02:06:43.557528Z","steps":["trace[1744967978] 'process raft request'  (duration: 100.487538ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:06:44.014497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.413539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-05T02:06:44.015170Z","caller":"traceutil/trace.go:171","msg":"trace[1277646282] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1441; }","duration":"157.152648ms","start":"2025-02-05T02:06:43.858004Z","end":"2025-02-05T02:06:44.015157Z","steps":["trace[1277646282] 'range keys from in-memory index tree'  (duration: 156.360616ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:06:44.014610Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.991497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2025-02-05T02:06:44.015536Z","caller":"traceutil/trace.go:171","msg":"trace[1744403162] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1441; }","duration":"126.922295ms","start":"2025-02-05T02:06:43.888604Z","end":"2025-02-05T02:06:44.015527Z","steps":["trace[1744403162] 'range keys from in-memory index tree'  (duration: 125.885022ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:09:12 up 5 min,  0 users,  load average: 1.29, 1.11, 0.55
	Linux addons-395572 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cca032c7f4f6387f2e07cf9b1593b3cf5c8966762b7a5aa46febb37f19ec4897] <==
	E0205 02:05:30.643565       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.169.116:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.169.116:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.169.116:443: connect: connection refused" logger="UnhandledError"
	I0205 02:05:30.710669       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0205 02:06:13.901658       1 conn.go:339] Error on socket receive: read tcp 192.168.39.234:8443->192.168.39.1:48060: use of closed network connection
	E0205 02:06:14.072797       1 conn.go:339] Error on socket receive: read tcp 192.168.39.234:8443->192.168.39.1:48076: use of closed network connection
	I0205 02:06:23.363238       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.18.115"}
	I0205 02:06:31.649092       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0205 02:06:48.287564       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0205 02:06:48.482741       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.224.29"}
	I0205 02:06:50.137474       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0205 02:06:50.923645       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	W0205 02:06:51.165190       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0205 02:07:21.783487       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0205 02:07:21.783712       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0205 02:07:21.815545       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0205 02:07:21.815640       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0205 02:07:21.829568       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0205 02:07:21.830299       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0205 02:07:21.857893       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0205 02:07:21.857946       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0205 02:07:21.877449       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0205 02:07:21.877602       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0205 02:07:22.858517       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0205 02:07:22.878111       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0205 02:07:22.914867       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0205 02:09:11.513170       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.174.125"}
	
	
	==> kube-controller-manager [2bc31a7bfadf4a970c0f8f1fa27592b68a5ef7c36a5d177ed7d37ee0aa07fa48] <==
	W0205 02:08:03.796808       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0205 02:08:03.796882       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0205 02:08:11.138336       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0205 02:08:11.139332       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0205 02:08:11.140284       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0205 02:08:11.140337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0205 02:08:29.809923       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0205 02:08:29.811085       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0205 02:08:29.811946       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0205 02:08:29.812017       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0205 02:08:38.936481       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0205 02:08:38.937416       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0205 02:08:38.938368       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0205 02:08:38.938407       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0205 02:08:45.797306       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0205 02:08:45.798595       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0205 02:08:45.799469       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0205 02:08:45.799507       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0205 02:09:08.551486       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0205 02:09:08.554107       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0205 02:09:08.555153       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0205 02:09:08.555191       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0205 02:09:11.330484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="38.344906ms"
	I0205 02:09:11.340877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="10.346861ms"
	I0205 02:09:11.340943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="31.739µs"
	
	
	==> kube-proxy [32bf9f7e6fcd833721bb342cb73b11fa4967e83a01959884d6557237275d7201] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0205 02:04:38.825120       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0205 02:04:38.840412       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.234"]
	E0205 02:04:38.840506       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 02:04:38.926364       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0205 02:04:38.926404       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0205 02:04:38.926428       1 server_linux.go:170] "Using iptables Proxier"
	I0205 02:04:38.928818       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 02:04:38.929119       1 server.go:497] "Version info" version="v1.32.1"
	I0205 02:04:38.929131       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:04:38.933443       1 config.go:199] "Starting service config controller"
	I0205 02:04:38.933462       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 02:04:38.933491       1 config.go:105] "Starting endpoint slice config controller"
	I0205 02:04:38.933495       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 02:04:38.933866       1 config.go:329] "Starting node config controller"
	I0205 02:04:38.933873       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 02:04:39.034306       1 shared_informer.go:320] Caches are synced for node config
	I0205 02:04:39.034379       1 shared_informer.go:320] Caches are synced for service config
	I0205 02:04:39.034392       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [98456be81c1f650ec250f6dea72c5ae1f6bbe435c20b37b59cb0fa3d5996a7fc] <==
	W0205 02:04:29.265940       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0205 02:04:29.267342       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:29.267467       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0205 02:04:29.267502       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:30.139752       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0205 02:04:30.139861       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:30.152483       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0205 02:04:30.152532       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:30.243399       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0205 02:04:30.243455       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:30.325005       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0205 02:04:30.325049       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:30.348396       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0205 02:04:30.348442       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:30.371267       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0205 02:04:30.371371       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:30.392984       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0205 02:04:30.393072       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:30.512466       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0205 02:04:30.512620       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:30.541946       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0205 02:04:30.542276       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:30.667567       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0205 02:04:30.667633       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0205 02:04:32.747000       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 05 02:08:32 addons-395572 kubelet[1223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 05 02:08:32 addons-395572 kubelet[1223]: E0205 02:08:32.553486    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721312552854997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:08:32 addons-395572 kubelet[1223]: E0205 02:08:32.553512    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721312552854997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:08:42 addons-395572 kubelet[1223]: E0205 02:08:42.556031    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721322555669382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:08:42 addons-395572 kubelet[1223]: E0205 02:08:42.556078    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721322555669382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:08:52 addons-395572 kubelet[1223]: E0205 02:08:52.560197    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721332559659353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:08:52 addons-395572 kubelet[1223]: E0205 02:08:52.560271    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721332559659353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:08:59 addons-395572 kubelet[1223]: I0205 02:08:59.249496    1223 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Feb 05 02:09:02 addons-395572 kubelet[1223]: I0205 02:09:02.252849    1223 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-v2g4x" secret="" err="secret \"gcp-auth\" not found"
	Feb 05 02:09:02 addons-395572 kubelet[1223]: E0205 02:09:02.562912    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721342562482842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:09:02 addons-395572 kubelet[1223]: E0205 02:09:02.562998    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721342562482842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.320060    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="f6374d3e-ea9b-4862-aa55-5619fd62c262" containerName="liveness-probe"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.320492    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="9c54bfcc-d8d0-407c-bc0a-eeedb77640f2" containerName="task-pv-container"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.320541    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="ccd517a5-37f4-46e8-8be2-b3e14d268b2b" containerName="volume-snapshot-controller"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.320588    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="63832836-7bed-4292-b28c-3bd2c4682aae" containerName="csi-resizer"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.320619    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="008c0274-352d-4f18-a798-24149ea2d7dc" containerName="csi-attacher"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.320650    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="f6374d3e-ea9b-4862-aa55-5619fd62c262" containerName="node-driver-registrar"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.320684    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="f6374d3e-ea9b-4862-aa55-5619fd62c262" containerName="hostpath"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.320714    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="f6374d3e-ea9b-4862-aa55-5619fd62c262" containerName="csi-provisioner"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.320745    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="6dd805cb-3ca9-4d3e-8464-1ed77ccd239a" containerName="volume-snapshot-controller"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.320776    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="f6374d3e-ea9b-4862-aa55-5619fd62c262" containerName="csi-external-health-monitor-controller"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.320807    1223 memory_manager.go:355] "RemoveStaleState removing state" podUID="f6374d3e-ea9b-4862-aa55-5619fd62c262" containerName="csi-snapshotter"
	Feb 05 02:09:11 addons-395572 kubelet[1223]: I0205 02:09:11.374416    1223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6jsc\" (UniqueName: \"kubernetes.io/projected/5603b6ef-5c4e-4296-80d6-51a6ce2d8b6d-kube-api-access-s6jsc\") pod \"hello-world-app-7d9564db4-jkk42\" (UID: \"5603b6ef-5c4e-4296-80d6-51a6ce2d8b6d\") " pod="default/hello-world-app-7d9564db4-jkk42"
	Feb 05 02:09:12 addons-395572 kubelet[1223]: E0205 02:09:12.569609    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721352568795172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:09:12 addons-395572 kubelet[1223]: E0205 02:09:12.569650    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721352568795172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595287,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [e8a9c4186af4d4f6e1cec7344e02c78ab602a37e7cbd4dc8de0c7f028756a2ab] <==
	I0205 02:04:44.364067       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0205 02:04:44.417979       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0205 02:04:44.418153       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0205 02:04:44.512327       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0205 02:04:44.512495       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-395572_fee8a57f-9546-400a-9916-d3c3c85294aa!
	I0205 02:04:44.513432       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11afa4a2-3bb0-4078-b983-da5bc0740fa0", APIVersion:"v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-395572_fee8a57f-9546-400a-9916-d3c3c85294aa became leader
	I0205 02:04:44.627178       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-395572_fee8a57f-9546-400a-9916-d3c3c85294aa!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-395572 -n addons-395572
helpers_test.go:261: (dbg) Run:  kubectl --context addons-395572 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-jkk42 ingress-nginx-admission-create-ds7sg ingress-nginx-admission-patch-kgpxz
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-395572 describe pod hello-world-app-7d9564db4-jkk42 ingress-nginx-admission-create-ds7sg ingress-nginx-admission-patch-kgpxz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-395572 describe pod hello-world-app-7d9564db4-jkk42 ingress-nginx-admission-create-ds7sg ingress-nginx-admission-patch-kgpxz: exit status 1 (66.933655ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-jkk42
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-395572/192.168.39.234
	Start Time:       Wed, 05 Feb 2025 02:09:11 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6jsc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6jsc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-jkk42 to addons-395572
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ds7sg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kgpxz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-395572 describe pod hello-world-app-7d9564db4-jkk42 ingress-nginx-admission-create-ds7sg ingress-nginx-admission-patch-kgpxz: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-395572 addons disable ingress-dns --alsologtostderr -v=1: (1.176870862s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-395572 addons disable ingress --alsologtostderr -v=1: (7.679382507s)
--- FAIL: TestAddons/parallel/Ingress (154.61s)
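For a failure like this it helps to separate "the ingress controller never answered" from "the backend never came up" before the addon is torn down. The commands below are a generic troubleshooting sketch against the profile named in these logs (addons-395572), not part of the test itself; the Host header value is a placeholder for whatever host the Ingress rule actually declares.

	# Are the controller pods, the controller service endpoints, and the backend objects all present?
	kubectl --context addons-395572 -n ingress-nginx get pods,svc,endpoints -o wide
	kubectl --context addons-395572 get ingress,svc -A

	# Repeat the in-VM probe with verbose output and a bounded timeout.
	out/minikube-linux-amd64 -p addons-395572 ssh \
	  "curl -sv --max-time 30 -H 'Host: example.invalid' http://127.0.0.1/"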

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (204.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [08a17d82-c3b6-47c4-9da0-e4b26ec25008] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00424095s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-910650 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-910650 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-910650 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-910650 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5c9ac106-fdff-44d7-b950-585250658d56] Pending
helpers_test.go:344: "sp-pod" [5c9ac106-fdff-44d7-b950-585250658d56] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5c9ac106-fdff-44d7-b950-585250658d56] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003595047s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-910650 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-910650 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-910650 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [022acf76-23c6-4c2d-9eb2-4b386725a4ee] Pending
helpers_test.go:344: "sp-pod" [022acf76-23c6-4c2d-9eb2-4b386725a4ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-910650 -n functional-910650
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-02-05 02:17:31.977555616 +0000 UTC m=+837.888396784
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-910650 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-910650 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-910650/192.168.39.25
Start Time:       Wed, 05 Feb 2025 02:14:31 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9d6fr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-9d6fr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3m    default-scheduler  Successfully assigned default/sp-pod to functional-910650
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-910650 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-910650 logs sp-pod -n default: exit status 1 (67.445011ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-910650 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
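Here the second sp-pod re-uses the claim from the first run and never gets past ContainerCreating, and its only recorded event is the Scheduled one above, so describe alone says little. A few generic follow-up checks, assuming the functional-910650 profile is still running (the claim name and label are taken from the logs above):

	# Is the claim still Bound, and did any volume or image events fire for the pod?
	kubectl --context functional-910650 get pvc myclaim
	kubectl --context functional-910650 get events -n default \
	  --field-selector involvedObject.name=sp-pod --sort-by=.lastTimestamp

	# The hostpath provisioner usually logs whether the volume was provisioned and re-bound.
	kubectl --context functional-910650 -n kube-system logs -l integration-test=storage-provisioner --tail=50

	# Did the kubelet/CRI ever start creating the container on the node?
	out/minikube-linux-amd64 -p functional-910650 ssh "sudo crictl ps -a"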
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-910650 -n functional-910650
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-910650 logs -n 25: (1.354363899s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-910650 ssh findmnt                                           | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | -T /mount2                                                              |                   |         |         |                     |                     |
	| ssh            | functional-910650 ssh findmnt                                           | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | -T /mount3                                                              |                   |         |         |                     |                     |
	| mount          | -p functional-910650                                                    | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | --kill=true                                                             |                   |         |         |                     |                     |
	| image          | functional-910650 image load --daemon                                   | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | kicbase/echo-server:functional-910650                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-910650 image ls                                              | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	| image          | functional-910650 image load --daemon                                   | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | kicbase/echo-server:functional-910650                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-910650 image ls                                              | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	| image          | functional-910650 image load --daemon                                   | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | kicbase/echo-server:functional-910650                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-910650 image ls                                              | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	| image          | functional-910650 image save kicbase/echo-server:functional-910650      | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-910650 image rm                                              | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | kicbase/echo-server:functional-910650                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-910650 image ls                                              | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	| image          | functional-910650 image load                                            | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-910650 image ls                                              | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	| image          | functional-910650 image save --daemon                                   | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | kicbase/echo-server:functional-910650                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| update-context | functional-910650                                                       | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-910650                                                       | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-910650                                                       | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| image          | functional-910650                                                       | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | image ls --format short                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-910650                                                       | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | image ls --format yaml                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-910650 ssh pgrep                                             | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | buildkitd                                                               |                   |         |         |                     |                     |
	| image          | functional-910650                                                       | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | image ls --format json                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-910650 image build -t                                        | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | localhost/my-image:functional-910650                                    |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-910650                                                       | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | image ls --format table                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-910650 image ls                                              | functional-910650 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 02:14:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 02:14:21.703036   27809 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:14:21.703165   27809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:21.703178   27809 out.go:358] Setting ErrFile to fd 2...
	I0205 02:14:21.703185   27809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:21.703483   27809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 02:14:21.704217   27809 out.go:352] Setting JSON to false
	I0205 02:14:21.705551   27809 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3413,"bootTime":1738718249,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:14:21.705678   27809 start.go:139] virtualization: kvm guest
	I0205 02:14:21.707329   27809 out.go:177] * [functional-910650] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 02:14:21.708869   27809 notify.go:220] Checking for updates...
	I0205 02:14:21.708890   27809 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 02:14:21.710062   27809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:14:21.711292   27809 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 02:14:21.712426   27809 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 02:14:21.713574   27809 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 02:14:21.714724   27809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 02:14:21.716156   27809 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:14:21.716531   27809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:14:21.716590   27809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:14:21.732255   27809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40655
	I0205 02:14:21.732651   27809 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:14:21.733245   27809 main.go:141] libmachine: Using API Version  1
	I0205 02:14:21.733270   27809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:14:21.733659   27809 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:14:21.733897   27809 main.go:141] libmachine: (functional-910650) Calling .DriverName
	I0205 02:14:21.734191   27809 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:14:21.734658   27809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:14:21.734713   27809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:14:21.751373   27809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42367
	I0205 02:14:21.751868   27809 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:14:21.752355   27809 main.go:141] libmachine: Using API Version  1
	I0205 02:14:21.752370   27809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:14:21.752706   27809 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:14:21.752879   27809 main.go:141] libmachine: (functional-910650) Calling .DriverName
	I0205 02:14:21.797315   27809 out.go:177] * Using the kvm2 driver based on existing profile
	I0205 02:14:21.798487   27809 start.go:297] selected driver: kvm2
	I0205 02:14:21.798505   27809 start.go:901] validating driver "kvm2" against &{Name:functional-910650 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-910650 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:14:21.798644   27809 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 02:14:21.800035   27809 cni.go:84] Creating CNI manager for ""
	I0205 02:14:21.800105   27809 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 02:14:21.800202   27809 start.go:340] cluster config:
	{Name:functional-910650 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-910650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:14:21.802231   27809 out.go:177] * dry-run validation complete!
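The cluster configuration echoed in the start log above is also persisted by minikube on disk. The following is a minimal sketch (not part of the test run) of how that profile config could be inspected, assuming the conventional layout `$MINIKUBE_HOME/profiles/<profile>/config.json`; the layout and field names used below are assumptions, only `MINIKUBE_HOME` itself appears in the log.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// MINIKUBE_HOME as reported in the start log, e.g.
		// /home/jenkins/minikube-integration/20363-12788/.minikube
		home := os.Getenv("MINIKUBE_HOME")

		// Assumed profile layout: <MINIKUBE_HOME>/profiles/<profile>/config.json
		path := filepath.Join(home, "profiles", "functional-910650", "config.json")

		raw, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}

		// Decode into a generic map so the sketch does not depend on
		// minikube's internal ClusterConfig struct.
		var cfg map[string]any
		if err := json.Unmarshal(raw, &cfg); err != nil {
			panic(err)
		}
		fmt.Println("Driver:", cfg["Driver"])
		fmt.Println("Nodes:", cfg["Nodes"])
	}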
	
	
	==> CRI-O <==
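The debug entries below record CRI gRPC calls (/runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, /runtime.v1.RuntimeService/ListContainers, /runtime.v1.RuntimeService/ListPodSandbox) answered by the crio daemon on the node. As a minimal sketch, not part of the test itself, the same data can be queried with a plain CRI client; the socket path /var/run/crio/crio.sock is the CRI-O default and is assumed here.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket (assumed default path) over a local unix connection.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// Same RPC as the RuntimeService/Version lines in the log below.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// Same RPC as the RuntimeService/ListContainers lines: an empty
		// filter returns the full container list.
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-30s %-20s %s\n",
				c.Metadata.Name, c.State, c.Labels["io.kubernetes.pod.name"])
		}
	}

From a shell on the node, `crictl ps -a` and `crictl pods` return the same container and sandbox listings without any code.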
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.781552016Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0d4ebd9-43f9-42a5-8cea-83a47db5a793 name=/runtime.v1.RuntimeService/Version
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.782612676Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3c0b65f-6712-4230-8fe4-6f0993fc9d17 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.783316510Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721852783296605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3c0b65f-6712-4230-8fe4-6f0993fc9d17 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.783782947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=879445a8-4c51-4073-a418-5fabeceb14e1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.783846895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=879445a8-4c51-4073-a418-5fabeceb14e1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.784187483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78b5138b875a772a7063d3e2217fe27d2de180bf614e79b7eedf7316847399ab,PodSandboxId:8246285bb99b7966eaa53a9a72b7b60767da17ff4185e39c7716cf1a77976e14,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1738721686243734114,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-58ccfd96bb-dq6dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4e7751-e58b-4b47-a709-8c168fcf136a,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644be7f12ed065b2702594d40d3094c1382681f7cb18501a0124174a1cede414,PodSandboxId:f516c9b35bd9e8aa88e15791c68063644d9667461f2b4318f0f98760478c556d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1738721674011910744,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-mhz29,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7fcf7ac0-
66dc-4a69-b059-9802b365ca98,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b0bde1fadaddb954a00d22b1d87f45f11e1e91c0a68747aeb43ff15aa05c09d,PodSandboxId:83196fc6e47d5c2efbc5480870f8fc55bb2ea34a1cafb9e1133a92f365534bf1,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1738721667255660344,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-
scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-k2kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 17611628-6a3d-40e5-a811-7ee4b4a9ddbd,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c5834ef4945982ac0f60a01869afded58652eec5be22508419d40ad1c93126,PodSandboxId:bf38d2769bbe7afeba4e2e7e981326eede1f9e983e9858091adf8c27ca433b16,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c02
0289c,State:CONTAINER_EXITED,CreatedAt:1738721656086441690,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 953b0a2f-4dbb-4fcc-85c1-f51d97e97e61,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc6adf7c709b8c964181ce9729ef57bf8e3a9cbc80191d2df3d9707f414ca0d,PodSandboxId:ac4d054666c77e1a962dccba829c547c3525f6ff50fd605e499e63dbeb60c967,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
,State:CONTAINER_RUNNING,CreatedAt:1738721653579188411,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-wrkgl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86972029-8f91-48c5-8edf-8aa85ed26ca7,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8b10fd71d9f21f970daf5dd2c387a2e1936636c723ca133f79ed9e9d66a5b,PodSandboxId:a2ad07eaa16c0bce6c4b562c50d6f2f75a2055ab9fb4af5c35ff05aefa9acc83,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605
a538410,State:CONTAINER_RUNNING,CreatedAt:1738721653489439452,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-k6qhb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dd71066-2b27-4030-ad19-eec45fdb7bac,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c711ebfd415d07af2dbc8d9e2241d809a614cc97b9cff2fb0354752e86b8917,PodSandboxId:e2561b0a0b91713fe7006e5c03222cbad68defe537fac988a21f58f534712fbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONT
AINER_RUNNING,CreatedAt:1738721625845869720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88qpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d59d5c0-03f1-421e-9809-9c010a4a6282,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93939ed04e4f6a26930d5d93af594dda472e1048c13ba6406fb596b56e57968,PodSandboxId:543f0168053f9cb5f374e18a63a6a5dce128efad894afb57b42eda529009154c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:
1738721625823676164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a17d82-c3b6-47c4-9da0-e4b26ec25008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4859144feeb41514390369b071ffaa13ac2f05a9026d9c314320cda965fbef4b,PodSandboxId:bf141d9d4a10c965cb168762a9788e3650fb1bf9f943136bea2fc7cb4d4e8a99,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738721625849269380,Labe
ls:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nmxv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247a5c29-8764-4782-ae98-364cdfde9beb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b63fc06270bd4f4787320a89c1d86cdcbffbe38a9a0af59cc002a1c6115e53,PodSandboxId:e0a4c9519f8751946dc764e955a7d6f6239683380d7be85479b7427f16b90e5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738721622339757725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079c3736255cc8d2c02aa70d294c0491,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2186b22be5a178d033842884a0b7b70fbafb2b51c014dcce5c6a661cf2d8200,PodSandboxId:707eed2b9c556ec3d4488ac21d82491f12e8625501c3f43fb525f2de68867e51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738721622157664115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3073fa6296fe10e8ed4ee220ea36bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4e3b95b4a4d42dff82f80e07c3d4757852331255b6759159e88511d44a5bbd,PodSandboxId:221e79efeaacfba9e6a752b92e52696140694347b52b4df726f21786d494439f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738721622167165261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e39b1c0d320386380cb40ab221bd88,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4791a6b52e420c6b24d64b6a1131379826766c8ecc6f2c0b80c430f349589c7,PodSandboxId:00ef679aa96e7a4ac62cbdefbeed369c319b31816c02c0b2f7fce10f9fccbec1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738721622137491148,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473e65733f7294c10e7cbbb013aae69c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac20a1598bd6a8684f6eda37ee8319dba42869b26ff7fd278b345bbb801774ce,PodSandboxId:252be47a5347ec3004924811fa4502a3ec3cd9e22ff9e12e3f9d9a5596a34f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738721584722910352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nmxv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247a5c29-8764-4782-ae98-364cdfde9beb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a1536465e4c28d9ec4134af500dbe93c78da372ecb562b568952278deb87be,PodSandboxId:edffae40827327acbfcf362b6b40ca4ec3ec365c6fbb710fb0f010e412f32c8f,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738721584289549102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88qpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d59d5c0-03f1-421e-9809-9c010a4a6282,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e79d65a368ef1c6eb02768657bfa8d84264f9e629df72c0e5259231ba4f2c05,PodSandboxId:7276b1d34ea5beaaa768c41eef61c70789c9528e8b8aa05cf7baff3479657a34,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738721584240770674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a17d82-c3b6-47c4-9da0-e4b26ec25008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a254ce2946e24c21d7510fbe9dee57557272200a3dfb5dc0431cc9294b0001ae,PodSandboxId:a464e6f032f6c2be3b648e1dde90f35e393cba4f63880b0cf7bbbb120edf28b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738721580503952306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3073fa6296fe10e8ed4ee220ea36bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98af606a04faa861c56f4063ce1b50abba73dbd601d761d4db162ee10ef18a04,PodSandboxId:e10f9b20be5542b570d02712703dedf589ab5a536c0024758620f50bb6eab000,Metadata:&ContainerMetadata{Name:kube-schedul
er,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738721580450788018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e39b1c0d320386380cb40ab221bd88,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed785972503d44c9f59a2ad848d64ed8a90dca50320d73ad31d376081ada8940,PodSandboxId:d4280d93cd564ab43e5a7c8f5dd41847ca8dcb091c1bbc677dc4ad7c80e70bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&Im
ageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738721580399622076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473e65733f7294c10e7cbbb013aae69c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=879445a8-4c51-4073-a418-5fabeceb14e1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.813501289Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7461d4d3-2703-4589-b6cb-5200a383e72a name=/runtime.v1.RuntimeService/Version
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.813567935Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7461d4d3-2703-4589-b6cb-5200a383e72a name=/runtime.v1.RuntimeService/Version
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.814553432Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fec7641-348c-415f-aa35-a09c2e8193d0 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.815257938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721852815236990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fec7641-348c-415f-aa35-a09c2e8193d0 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.815937318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=472d65d1-7cce-4912-9ddb-d0c5a68d2ff2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.815987686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=472d65d1-7cce-4912-9ddb-d0c5a68d2ff2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.816400823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78b5138b875a772a7063d3e2217fe27d2de180bf614e79b7eedf7316847399ab,PodSandboxId:8246285bb99b7966eaa53a9a72b7b60767da17ff4185e39c7716cf1a77976e14,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1738721686243734114,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-58ccfd96bb-dq6dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4e7751-e58b-4b47-a709-8c168fcf136a,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644be7f12ed065b2702594d40d3094c1382681f7cb18501a0124174a1cede414,PodSandboxId:f516c9b35bd9e8aa88e15791c68063644d9667461f2b4318f0f98760478c556d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1738721674011910744,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-mhz29,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7fcf7ac0-
66dc-4a69-b059-9802b365ca98,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b0bde1fadaddb954a00d22b1d87f45f11e1e91c0a68747aeb43ff15aa05c09d,PodSandboxId:83196fc6e47d5c2efbc5480870f8fc55bb2ea34a1cafb9e1133a92f365534bf1,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1738721667255660344,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-
scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-k2kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 17611628-6a3d-40e5-a811-7ee4b4a9ddbd,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c5834ef4945982ac0f60a01869afded58652eec5be22508419d40ad1c93126,PodSandboxId:bf38d2769bbe7afeba4e2e7e981326eede1f9e983e9858091adf8c27ca433b16,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c02
0289c,State:CONTAINER_EXITED,CreatedAt:1738721656086441690,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 953b0a2f-4dbb-4fcc-85c1-f51d97e97e61,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc6adf7c709b8c964181ce9729ef57bf8e3a9cbc80191d2df3d9707f414ca0d,PodSandboxId:ac4d054666c77e1a962dccba829c547c3525f6ff50fd605e499e63dbeb60c967,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
,State:CONTAINER_RUNNING,CreatedAt:1738721653579188411,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-wrkgl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86972029-8f91-48c5-8edf-8aa85ed26ca7,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8b10fd71d9f21f970daf5dd2c387a2e1936636c723ca133f79ed9e9d66a5b,PodSandboxId:a2ad07eaa16c0bce6c4b562c50d6f2f75a2055ab9fb4af5c35ff05aefa9acc83,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605
a538410,State:CONTAINER_RUNNING,CreatedAt:1738721653489439452,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-k6qhb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dd71066-2b27-4030-ad19-eec45fdb7bac,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c711ebfd415d07af2dbc8d9e2241d809a614cc97b9cff2fb0354752e86b8917,PodSandboxId:e2561b0a0b91713fe7006e5c03222cbad68defe537fac988a21f58f534712fbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONT
AINER_RUNNING,CreatedAt:1738721625845869720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88qpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d59d5c0-03f1-421e-9809-9c010a4a6282,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93939ed04e4f6a26930d5d93af594dda472e1048c13ba6406fb596b56e57968,PodSandboxId:543f0168053f9cb5f374e18a63a6a5dce128efad894afb57b42eda529009154c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:
1738721625823676164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a17d82-c3b6-47c4-9da0-e4b26ec25008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4859144feeb41514390369b071ffaa13ac2f05a9026d9c314320cda965fbef4b,PodSandboxId:bf141d9d4a10c965cb168762a9788e3650fb1bf9f943136bea2fc7cb4d4e8a99,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738721625849269380,Labe
ls:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nmxv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247a5c29-8764-4782-ae98-364cdfde9beb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b63fc06270bd4f4787320a89c1d86cdcbffbe38a9a0af59cc002a1c6115e53,PodSandboxId:e0a4c9519f8751946dc764e955a7d6f6239683380d7be85479b7427f16b90e5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738721622339757725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079c3736255cc8d2c02aa70d294c0491,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2186b22be5a178d033842884a0b7b70fbafb2b51c014dcce5c6a661cf2d8200,PodSandboxId:707eed2b9c556ec3d4488ac21d82491f12e8625501c3f43fb525f2de68867e51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738721622157664115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3073fa6296fe10e8ed4ee220ea36bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4e3b95b4a4d42dff82f80e07c3d4757852331255b6759159e88511d44a5bbd,PodSandboxId:221e79efeaacfba9e6a752b92e52696140694347b52b4df726f21786d494439f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738721622167165261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e39b1c0d320386380cb40ab221bd88,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4791a6b52e420c6b24d64b6a1131379826766c8ecc6f2c0b80c430f349589c7,PodSandboxId:00ef679aa96e7a4ac62cbdefbeed369c319b31816c02c0b2f7fce10f9fccbec1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738721622137491148,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473e65733f7294c10e7cbbb013aae69c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac20a1598bd6a8684f6eda37ee8319dba42869b26ff7fd278b345bbb801774ce,PodSandboxId:252be47a5347ec3004924811fa4502a3ec3cd9e22ff9e12e3f9d9a5596a34f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738721584722910352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nmxv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247a5c29-8764-4782-ae98-364cdfde9beb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a1536465e4c28d9ec4134af500dbe93c78da372ecb562b568952278deb87be,PodSandboxId:edffae40827327acbfcf362b6b40ca4ec3ec365c6fbb710fb0f010e412f32c8f,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738721584289549102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88qpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d59d5c0-03f1-421e-9809-9c010a4a6282,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e79d65a368ef1c6eb02768657bfa8d84264f9e629df72c0e5259231ba4f2c05,PodSandboxId:7276b1d34ea5beaaa768c41eef61c70789c9528e8b8aa05cf7baff3479657a34,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738721584240770674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a17d82-c3b6-47c4-9da0-e4b26ec25008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a254ce2946e24c21d7510fbe9dee57557272200a3dfb5dc0431cc9294b0001ae,PodSandboxId:a464e6f032f6c2be3b648e1dde90f35e393cba4f63880b0cf7bbbb120edf28b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738721580503952306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3073fa6296fe10e8ed4ee220ea36bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98af606a04faa861c56f4063ce1b50abba73dbd601d761d4db162ee10ef18a04,PodSandboxId:e10f9b20be5542b570d02712703dedf589ab5a536c0024758620f50bb6eab000,Metadata:&ContainerMetadata{Name:kube-schedul
er,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738721580450788018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e39b1c0d320386380cb40ab221bd88,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed785972503d44c9f59a2ad848d64ed8a90dca50320d73ad31d376081ada8940,PodSandboxId:d4280d93cd564ab43e5a7c8f5dd41847ca8dcb091c1bbc677dc4ad7c80e70bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&Im
ageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738721580399622076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473e65733f7294c10e7cbbb013aae69c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=472d65d1-7cce-4912-9ddb-d0c5a68d2ff2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.825210457Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6bc0029e-2b72-47bc-887e-070c614d59f2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.825970372Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8246285bb99b7966eaa53a9a72b7b60767da17ff4185e39c7716cf1a77976e14,Metadata:&PodSandboxMetadata{Name:mysql-58ccfd96bb-dq6dw,Uid:ef4e7751-e58b-4b47-a709-8c168fcf136a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1738721664183863895,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-58ccfd96bb-dq6dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4e7751-e58b-4b47-a709-8c168fcf136a,pod-template-hash: 58ccfd96bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:14:23.872753267Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f516c9b35bd9e8aa88e15791c68063644d9667461f2b4318f0f98760478c556d,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-mhz29,Uid:7fcf7ac0-66dc-4a69-b059-9802b365ca98,Namespace:kubernetes-d
ashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1738721664084299507,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-mhz29,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7fcf7ac0-66dc-4a69-b059-9802b365ca98,k8s-app: kubernetes-dashboard,pod-template-hash: 7779f9b69b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:14:23.765312948Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83196fc6e47d5c2efbc5480870f8fc55bb2ea34a1cafb9e1133a92f365534bf1,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-5d59dccf9b-k2kzr,Uid:17611628-6a3d-40e5-a811-7ee4b4a9ddbd,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1738721664059987480,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-k2kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 17
611628-6a3d-40e5-a811-7ee4b4a9ddbd,k8s-app: dashboard-metrics-scraper,pod-template-hash: 5d59dccf9b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:14:23.745990201Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:bf38d2769bbe7afeba4e2e7e981326eede1f9e983e9858091adf8c27ca433b16,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:953b0a2f-4dbb-4fcc-85c1-f51d97e97e61,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1738721653406181198,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 953b0a2f-4dbb-4fcc-85c1-f51d97e97e61,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:14:12.998585260Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ac4d054666c77e1a962dccba829c547c3525f6ff50fd605e499e63dbeb60c967,Metadata:&PodSandboxMe
tadata{Name:hello-node-fcfd88b6f-wrkgl,Uid:86972029-8f91-48c5-8edf-8aa85ed26ca7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1738721650595931872,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-fcfd88b6f-wrkgl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86972029-8f91-48c5-8edf-8aa85ed26ca7,pod-template-hash: fcfd88b6f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:14:10.285029203Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2ad07eaa16c0bce6c4b562c50d6f2f75a2055ab9fb4af5c35ff05aefa9acc83,Metadata:&PodSandboxMetadata{Name:hello-node-connect-58f9cf68d8-k6qhb,Uid:5dd71066-2b27-4030-ad19-eec45fdb7bac,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1738721650049957956,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-k6qhb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.
uid: 5dd71066-2b27-4030-ad19-eec45fdb7bac,pod-template-hash: 58f9cf68d8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:14:09.743617213Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e0a4c9519f8751946dc764e955a7d6f6239683380d7be85479b7427f16b90e5f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-910650,Uid:079c3736255cc8d2c02aa70d294c0491,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1738721622139082238,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079c3736255cc8d2c02aa70d294c0491,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.25:8441,kubernetes.io/config.hash: 079c3736255cc8d2c02aa70d294c0491,kubernetes.io/config.seen: 2025-02-05T02:13:41.483765841Z,kubernetes.io/config.source: file,},RuntimeHandl
er:,},&PodSandbox{Id:bf141d9d4a10c965cb168762a9788e3650fb1bf9f943136bea2fc7cb4d4e8a99,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-nmxv8,Uid:247a5c29-8764-4782-ae98-364cdfde9beb,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1738721619389461688,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-nmxv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247a5c29-8764-4782-ae98-364cdfde9beb,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:13:03.773200282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:221e79efeaacfba9e6a752b92e52696140694347b52b4df726f21786d494439f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-910650,Uid:a1e39b1c0d320386380cb40ab221bd88,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1738721619271415790,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,i
o.kubernetes.pod.name: kube-scheduler-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e39b1c0d320386380cb40ab221bd88,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a1e39b1c0d320386380cb40ab221bd88,kubernetes.io/config.seen: 2025-02-05T02:12:59.782421236Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2561b0a0b91713fe7006e5c03222cbad68defe537fac988a21f58f534712fbb,Metadata:&PodSandboxMetadata{Name:kube-proxy-88qpw,Uid:5d59d5c0-03f1-421e-9809-9c010a4a6282,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1738721619242892365,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-88qpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d59d5c0-03f1-421e-9809-9c010a4a6282,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:13:03.773209815Z,kubernetes.io/co
nfig.source: api,},RuntimeHandler:,},&PodSandbox{Id:543f0168053f9cb5f374e18a63a6a5dce128efad894afb57b42eda529009154c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:08a17d82-c3b6-47c4-9da0-e4b26ec25008,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1738721619218157408,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a17d82-c3b6-47c4-9da0-e4b26ec25008,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisione
r:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-02-05T02:13:03.773212265Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:707eed2b9c556ec3d4488ac21d82491f12e8625501c3f43fb525f2de68867e51,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-910650,Uid:d3073fa6296fe10e8ed4ee220ea36bc7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1738721619086609104,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3073fa6296fe10e8ed4ee220ea36bc7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash:
d3073fa6296fe10e8ed4ee220ea36bc7,kubernetes.io/config.seen: 2025-02-05T02:12:59.782420018Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:00ef679aa96e7a4ac62cbdefbeed369c319b31816c02c0b2f7fce10f9fccbec1,Metadata:&PodSandboxMetadata{Name:etcd-functional-910650,Uid:473e65733f7294c10e7cbbb013aae69c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1738721619058849600,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473e65733f7294c10e7cbbb013aae69c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.25:2379,kubernetes.io/config.hash: 473e65733f7294c10e7cbbb013aae69c,kubernetes.io/config.seen: 2025-02-05T02:12:59.782413000Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:252be47a5347ec3004924811fa4502a3ec3cd9e22ff9e12e3f9d9a5596a34f19,Metadata:&PodSandbox
Metadata{Name:coredns-668d6bf9bc-nmxv8,Uid:247a5c29-8764-4782-ae98-364cdfde9beb,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1738721584268162424,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-nmxv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247a5c29-8764-4782-ae98-364cdfde9beb,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:13:03.773200282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:edffae40827327acbfcf362b6b40ca4ec3ec365c6fbb710fb0f010e412f32c8f,Metadata:&PodSandboxMetadata{Name:kube-proxy-88qpw,Uid:5d59d5c0-03f1-421e-9809-9c010a4a6282,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1738721584112416713,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-88qpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 5d59d5c0-03f1-421e-9809-9c010a4a6282,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T02:13:03.773209815Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7276b1d34ea5beaaa768c41eef61c70789c9528e8b8aa05cf7baff3479657a34,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:08a17d82-c3b6-47c4-9da0-e4b26ec25008,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1738721584105785749,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a17d82-c3b6-47c4-9da0-e4b26ec25008,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage
-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-02-05T02:13:03.773212265Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e10f9b20be5542b570d02712703dedf589ab5a536c0024758620f50bb6eab000,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-910650,Uid:a1e39b1c0d320386380cb40ab221bd88,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1738721580274669475,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-910650,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e39b1c0d320386380cb40ab221bd88,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a1e39b1c0d320386380cb40ab221bd88,kubernetes.io/config.seen: 2025-02-05T02:12:59.782421236Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a464e6f032f6c2be3b648e1dde90f35e393cba4f63880b0cf7bbbb120edf28b5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-910650,Uid:d3073fa6296fe10e8ed4ee220ea36bc7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1738721580254665375,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3073fa6296fe10e8ed4ee220ea36bc7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d3073fa6296fe10e8ed4ee220ea36bc7,kubernetes.io/config.seen: 2025-02-05T02:12:59.782420018Z
,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d4280d93cd564ab43e5a7c8f5dd41847ca8dcb091c1bbc677dc4ad7c80e70bed,Metadata:&PodSandboxMetadata{Name:etcd-functional-910650,Uid:473e65733f7294c10e7cbbb013aae69c,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1738721580235629397,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473e65733f7294c10e7cbbb013aae69c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.25:2379,kubernetes.io/config.hash: 473e65733f7294c10e7cbbb013aae69c,kubernetes.io/config.seen: 2025-02-05T02:12:59.782413000Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6bc0029e-2b72-47bc-887e-070c614d59f2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.826792563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b2553cc-2581-441c-bd73-7773728ae36d name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.826914625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b2553cc-2581-441c-bd73-7773728ae36d name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.827259880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78b5138b875a772a7063d3e2217fe27d2de180bf614e79b7eedf7316847399ab,PodSandboxId:8246285bb99b7966eaa53a9a72b7b60767da17ff4185e39c7716cf1a77976e14,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1738721686243734114,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-58ccfd96bb-dq6dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4e7751-e58b-4b47-a709-8c168fcf136a,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644be7f12ed065b2702594d40d3094c1382681f7cb18501a0124174a1cede414,PodSandboxId:f516c9b35bd9e8aa88e15791c68063644d9667461f2b4318f0f98760478c556d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1738721674011910744,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-mhz29,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7fcf7ac0-
66dc-4a69-b059-9802b365ca98,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b0bde1fadaddb954a00d22b1d87f45f11e1e91c0a68747aeb43ff15aa05c09d,PodSandboxId:83196fc6e47d5c2efbc5480870f8fc55bb2ea34a1cafb9e1133a92f365534bf1,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1738721667255660344,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-
scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-k2kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 17611628-6a3d-40e5-a811-7ee4b4a9ddbd,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c5834ef4945982ac0f60a01869afded58652eec5be22508419d40ad1c93126,PodSandboxId:bf38d2769bbe7afeba4e2e7e981326eede1f9e983e9858091adf8c27ca433b16,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c02
0289c,State:CONTAINER_EXITED,CreatedAt:1738721656086441690,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 953b0a2f-4dbb-4fcc-85c1-f51d97e97e61,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc6adf7c709b8c964181ce9729ef57bf8e3a9cbc80191d2df3d9707f414ca0d,PodSandboxId:ac4d054666c77e1a962dccba829c547c3525f6ff50fd605e499e63dbeb60c967,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
,State:CONTAINER_RUNNING,CreatedAt:1738721653579188411,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-wrkgl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86972029-8f91-48c5-8edf-8aa85ed26ca7,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8b10fd71d9f21f970daf5dd2c387a2e1936636c723ca133f79ed9e9d66a5b,PodSandboxId:a2ad07eaa16c0bce6c4b562c50d6f2f75a2055ab9fb4af5c35ff05aefa9acc83,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605
a538410,State:CONTAINER_RUNNING,CreatedAt:1738721653489439452,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-k6qhb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dd71066-2b27-4030-ad19-eec45fdb7bac,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c711ebfd415d07af2dbc8d9e2241d809a614cc97b9cff2fb0354752e86b8917,PodSandboxId:e2561b0a0b91713fe7006e5c03222cbad68defe537fac988a21f58f534712fbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONT
AINER_RUNNING,CreatedAt:1738721625845869720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88qpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d59d5c0-03f1-421e-9809-9c010a4a6282,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93939ed04e4f6a26930d5d93af594dda472e1048c13ba6406fb596b56e57968,PodSandboxId:543f0168053f9cb5f374e18a63a6a5dce128efad894afb57b42eda529009154c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:
1738721625823676164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a17d82-c3b6-47c4-9da0-e4b26ec25008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4859144feeb41514390369b071ffaa13ac2f05a9026d9c314320cda965fbef4b,PodSandboxId:bf141d9d4a10c965cb168762a9788e3650fb1bf9f943136bea2fc7cb4d4e8a99,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738721625849269380,Labe
ls:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nmxv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247a5c29-8764-4782-ae98-364cdfde9beb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b63fc06270bd4f4787320a89c1d86cdcbffbe38a9a0af59cc002a1c6115e53,PodSandboxId:e0a4c9519f8751946dc764e955a7d6f6239683380d7be85479b7427f16b90e5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738721622339757725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079c3736255cc8d2c02aa70d294c0491,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2186b22be5a178d033842884a0b7b70fbafb2b51c014dcce5c6a661cf2d8200,PodSandboxId:707eed2b9c556ec3d4488ac21d82491f12e8625501c3f43fb525f2de68867e51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738721622157664115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3073fa6296fe10e8ed4ee220ea36bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4e3b95b4a4d42dff82f80e07c3d4757852331255b6759159e88511d44a5bbd,PodSandboxId:221e79efeaacfba9e6a752b92e52696140694347b52b4df726f21786d494439f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738721622167165261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e39b1c0d320386380cb40ab221bd88,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4791a6b52e420c6b24d64b6a1131379826766c8ecc6f2c0b80c430f349589c7,PodSandboxId:00ef679aa96e7a4ac62cbdefbeed369c319b31816c02c0b2f7fce10f9fccbec1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738721622137491148,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473e65733f7294c10e7cbbb013aae69c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac20a1598bd6a8684f6eda37ee8319dba42869b26ff7fd278b345bbb801774ce,PodSandboxId:252be47a5347ec3004924811fa4502a3ec3cd9e22ff9e12e3f9d9a5596a34f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738721584722910352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nmxv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247a5c29-8764-4782-ae98-364cdfde9beb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a1536465e4c28d9ec4134af500dbe93c78da372ecb562b568952278deb87be,PodSandboxId:edffae40827327acbfcf362b6b40ca4ec3ec365c6fbb710fb0f010e412f32c8f,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738721584289549102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88qpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d59d5c0-03f1-421e-9809-9c010a4a6282,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e79d65a368ef1c6eb02768657bfa8d84264f9e629df72c0e5259231ba4f2c05,PodSandboxId:7276b1d34ea5beaaa768c41eef61c70789c9528e8b8aa05cf7baff3479657a34,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738721584240770674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a17d82-c3b6-47c4-9da0-e4b26ec25008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a254ce2946e24c21d7510fbe9dee57557272200a3dfb5dc0431cc9294b0001ae,PodSandboxId:a464e6f032f6c2be3b648e1dde90f35e393cba4f63880b0cf7bbbb120edf28b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738721580503952306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3073fa6296fe10e8ed4ee220ea36bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98af606a04faa861c56f4063ce1b50abba73dbd601d761d4db162ee10ef18a04,PodSandboxId:e10f9b20be5542b570d02712703dedf589ab5a536c0024758620f50bb6eab000,Metadata:&ContainerMetadata{Name:kube-schedul
er,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738721580450788018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e39b1c0d320386380cb40ab221bd88,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed785972503d44c9f59a2ad848d64ed8a90dca50320d73ad31d376081ada8940,PodSandboxId:d4280d93cd564ab43e5a7c8f5dd41847ca8dcb091c1bbc677dc4ad7c80e70bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&Im
ageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738721580399622076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473e65733f7294c10e7cbbb013aae69c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b2553cc-2581-441c-bd73-7773728ae36d name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.856482236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31a0543d-3dfd-4acb-9e17-da9d68de3333 name=/runtime.v1.RuntimeService/Version
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.856546029Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31a0543d-3dfd-4acb-9e17-da9d68de3333 name=/runtime.v1.RuntimeService/Version
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.858192600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2081ff48-0a03-411f-9c20-30e7e511e503 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.859081540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721852859058240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2081ff48-0a03-411f-9c20-30e7e511e503 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.859606123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa21472c-c9ec-4e1e-a4ef-f1945c0f202f name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.859672478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa21472c-c9ec-4e1e-a4ef-f1945c0f202f name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 02:17:32 functional-910650 crio[4189]: time="2025-02-05 02:17:32.860016016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78b5138b875a772a7063d3e2217fe27d2de180bf614e79b7eedf7316847399ab,PodSandboxId:8246285bb99b7966eaa53a9a72b7b60767da17ff4185e39c7716cf1a77976e14,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1738721686243734114,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-58ccfd96bb-dq6dw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef4e7751-e58b-4b47-a709-8c168fcf136a,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644be7f12ed065b2702594d40d3094c1382681f7cb18501a0124174a1cede414,PodSandboxId:f516c9b35bd9e8aa88e15791c68063644d9667461f2b4318f0f98760478c556d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1738721674011910744,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-mhz29,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7fcf7ac0-
66dc-4a69-b059-9802b365ca98,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b0bde1fadaddb954a00d22b1d87f45f11e1e91c0a68747aeb43ff15aa05c09d,PodSandboxId:83196fc6e47d5c2efbc5480870f8fc55bb2ea34a1cafb9e1133a92f365534bf1,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1738721667255660344,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-
scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-k2kzr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 17611628-6a3d-40e5-a811-7ee4b4a9ddbd,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c5834ef4945982ac0f60a01869afded58652eec5be22508419d40ad1c93126,PodSandboxId:bf38d2769bbe7afeba4e2e7e981326eede1f9e983e9858091adf8c27ca433b16,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c02
0289c,State:CONTAINER_EXITED,CreatedAt:1738721656086441690,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 953b0a2f-4dbb-4fcc-85c1-f51d97e97e61,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc6adf7c709b8c964181ce9729ef57bf8e3a9cbc80191d2df3d9707f414ca0d,PodSandboxId:ac4d054666c77e1a962dccba829c547c3525f6ff50fd605e499e63dbeb60c967,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
,State:CONTAINER_RUNNING,CreatedAt:1738721653579188411,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-wrkgl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86972029-8f91-48c5-8edf-8aa85ed26ca7,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8b10fd71d9f21f970daf5dd2c387a2e1936636c723ca133f79ed9e9d66a5b,PodSandboxId:a2ad07eaa16c0bce6c4b562c50d6f2f75a2055ab9fb4af5c35ff05aefa9acc83,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605
a538410,State:CONTAINER_RUNNING,CreatedAt:1738721653489439452,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-k6qhb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5dd71066-2b27-4030-ad19-eec45fdb7bac,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c711ebfd415d07af2dbc8d9e2241d809a614cc97b9cff2fb0354752e86b8917,PodSandboxId:e2561b0a0b91713fe7006e5c03222cbad68defe537fac988a21f58f534712fbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONT
AINER_RUNNING,CreatedAt:1738721625845869720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88qpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d59d5c0-03f1-421e-9809-9c010a4a6282,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93939ed04e4f6a26930d5d93af594dda472e1048c13ba6406fb596b56e57968,PodSandboxId:543f0168053f9cb5f374e18a63a6a5dce128efad894afb57b42eda529009154c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:
1738721625823676164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a17d82-c3b6-47c4-9da0-e4b26ec25008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4859144feeb41514390369b071ffaa13ac2f05a9026d9c314320cda965fbef4b,PodSandboxId:bf141d9d4a10c965cb168762a9788e3650fb1bf9f943136bea2fc7cb4d4e8a99,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738721625849269380,Labe
ls:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nmxv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247a5c29-8764-4782-ae98-364cdfde9beb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b63fc06270bd4f4787320a89c1d86cdcbffbe38a9a0af59cc002a1c6115e53,PodSandboxId:e0a4c9519f8751946dc764e955a7d6f6239683380d7be85479b7427f16b90e5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738721622339757725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 079c3736255cc8d2c02aa70d294c0491,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2186b22be5a178d033842884a0b7b70fbafb2b51c014dcce5c6a661cf2d8200,PodSandboxId:707eed2b9c556ec3d4488ac21d82491f12e8625501c3f43fb525f2de68867e51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738721622157664115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3073fa6296fe10e8ed4ee220ea36bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4e3b95b4a4d42dff82f80e07c3d4757852331255b6759159e88511d44a5bbd,PodSandboxId:221e79efeaacfba9e6a752b92e52696140694347b52b4df726f21786d494439f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738721622167165261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e39b1c0d320386380cb40ab221bd88,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4791a6b52e420c6b24d64b6a1131379826766c8ecc6f2c0b80c430f349589c7,PodSandboxId:00ef679aa96e7a4ac62cbdefbeed369c319b31816c02c0b2f7fce10f9fccbec1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738721622137491148,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473e65733f7294c10e7cbbb013aae69c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac20a1598bd6a8684f6eda37ee8319dba42869b26ff7fd278b345bbb801774ce,PodSandboxId:252be47a5347ec3004924811fa4502a3ec3cd9e22ff9e12e3f9d9a5596a34f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738721584722910352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-nmxv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247a5c29-8764-4782-ae98-364cdfde9beb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a1536465e4c28d9ec4134af500dbe93c78da372ecb562b568952278deb87be,PodSandboxId:edffae40827327acbfcf362b6b40ca4ec3ec365c6fbb710fb0f010e412f32c8f,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738721584289549102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88qpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d59d5c0-03f1-421e-9809-9c010a4a6282,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e79d65a368ef1c6eb02768657bfa8d84264f9e629df72c0e5259231ba4f2c05,PodSandboxId:7276b1d34ea5beaaa768c41eef61c70789c9528e8b8aa05cf7baff3479657a34,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1738721584240770674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a17d82-c3b6-47c4-9da0-e4b26ec25008,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a254ce2946e24c21d7510fbe9dee57557272200a3dfb5dc0431cc9294b0001ae,PodSandboxId:a464e6f032f6c2be3b648e1dde90f35e393cba4f63880b0cf7bbbb120edf28b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738721580503952306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3073fa6296fe10e8ed4ee220ea36bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98af606a04faa861c56f4063ce1b50abba73dbd601d761d4db162ee10ef18a04,PodSandboxId:e10f9b20be5542b570d02712703dedf589ab5a536c0024758620f50bb6eab000,Metadata:&ContainerMetadata{Name:kube-schedul
er,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738721580450788018,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e39b1c0d320386380cb40ab221bd88,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed785972503d44c9f59a2ad848d64ed8a90dca50320d73ad31d376081ada8940,PodSandboxId:d4280d93cd564ab43e5a7c8f5dd41847ca8dcb091c1bbc677dc4ad7c80e70bed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&Im
ageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738721580399622076,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-910650,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473e65733f7294c10e7cbbb013aae69c,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa21472c-c9ec-4e1e-a4ef-f1945c0f202f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	78b5138b875a7       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  2 minutes ago       Running             mysql                       0                   8246285bb99b7       mysql-58ccfd96bb-dq6dw
	644be7f12ed06       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   f516c9b35bd9e       kubernetes-dashboard-7779f9b69b-mhz29
	2b0bde1fadadd       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   3 minutes ago       Running             dashboard-metrics-scraper   0                   83196fc6e47d5       dashboard-metrics-scraper-5d59dccf9b-k2kzr
	e4c5834ef4945       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              3 minutes ago       Exited              mount-munger                0                   bf38d2769bbe7       busybox-mount
	5bc6adf7c709b       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   ac4d054666c77       hello-node-fcfd88b6f-wrkgl
	eda8b10fd71d9       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   a2ad07eaa16c0       hello-node-connect-58f9cf68d8-k6qhb
	4859144feeb41       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago       Running             coredns                     2                   bf141d9d4a10c       coredns-668d6bf9bc-nmxv8
	8c711ebfd415d       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 3 minutes ago       Running             kube-proxy                  2                   e2561b0a0b917       kube-proxy-88qpw
	b93939ed04e4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago       Running             storage-provisioner         2                   543f0168053f9       storage-provisioner
	34b63fc06270b       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                 3 minutes ago       Running             kube-apiserver              0                   e0a4c9519f875       kube-apiserver-functional-910650
	af4e3b95b4a4d       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 3 minutes ago       Running             kube-scheduler              2                   221e79efeaacf       kube-scheduler-functional-910650
	c2186b22be5a1       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 3 minutes ago       Running             kube-controller-manager     2                   707eed2b9c556       kube-controller-manager-functional-910650
	b4791a6b52e42       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 3 minutes ago       Running             etcd                        2                   00ef679aa96e7       etcd-functional-910650
	ac20a1598bd6a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago       Exited              coredns                     1                   252be47a5347e       coredns-668d6bf9bc-nmxv8
	45a1536465e4c       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 4 minutes ago       Exited              kube-proxy                  1                   edffae4082732       kube-proxy-88qpw
	0e79d65a368ef       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago       Exited              storage-provisioner         1                   7276b1d34ea5b       storage-provisioner
	a254ce2946e24       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 4 minutes ago       Exited              kube-controller-manager     1                   a464e6f032f6c       kube-controller-manager-functional-910650
	98af606a04faa       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 4 minutes ago       Exited              kube-scheduler              1                   e10f9b20be554       kube-scheduler-functional-910650
	ed785972503d4       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 4 minutes ago       Exited              etcd                        1                   d4280d93cd564       etcd-functional-910650
	
	
	==> coredns [4859144feeb41514390369b071ffaa13ac2f05a9026d9c314320cda965fbef4b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52188 - 19014 "HINFO IN 4700677036473782337.8637134765345876053. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030226147s
	
	
	==> coredns [ac20a1598bd6a8684f6eda37ee8319dba42869b26ff7fd278b345bbb801774ce] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44995 - 2878 "HINFO IN 3892042948656509068.4612245750817612743. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020257532s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-910650
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-910650
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d
	                    minikube.k8s.io/name=functional-910650
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_05T02_12_27_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 05 Feb 2025 02:12:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-910650
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 05 Feb 2025 02:17:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 05 Feb 2025 02:15:16 +0000   Wed, 05 Feb 2025 02:12:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 05 Feb 2025 02:15:16 +0000   Wed, 05 Feb 2025 02:12:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 05 Feb 2025 02:15:16 +0000   Wed, 05 Feb 2025 02:12:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 05 Feb 2025 02:15:16 +0000   Wed, 05 Feb 2025 02:12:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.25
	  Hostname:    functional-910650
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb254626caea495abd384d139d275fab
	  System UUID:                eb254626-caea-495a-bd38-4d139d275fab
	  Boot ID:                    ea9046ea-a7e2-417d-9f60-d0553501dcee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-k6qhb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  default                     hello-node-fcfd88b6f-wrkgl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  default                     mysql-58ccfd96bb-dq6dw                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    3m10s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-668d6bf9bc-nmxv8                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m1s
	  kube-system                 etcd-functional-910650                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m6s
	  kube-system                 kube-apiserver-functional-910650              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-controller-manager-functional-910650     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-88qpw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-functional-910650              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-k2kzr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-mhz29         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m                     kube-proxy       
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  Starting                 4m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m6s                   kubelet          Node functional-910650 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  5m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m6s                   kubelet          Node functional-910650 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s                   kubelet          Node functional-910650 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m6s                   kubelet          Starting kubelet.
	  Normal  NodeReady                5m5s                   kubelet          Node functional-910650 status is now: NodeReady
	  Normal  RegisteredNode           5m2s                   node-controller  Node functional-910650 event: Registered Node functional-910650 in Controller
	  Normal  NodeHasNoDiskPressure    4m34s (x8 over 4m34s)  kubelet          Node functional-910650 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m34s (x8 over 4m34s)  kubelet          Node functional-910650 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m34s (x7 over 4m34s)  kubelet          Node functional-910650 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m27s                  node-controller  Node functional-910650 event: Registered Node functional-910650 in Controller
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node functional-910650 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node functional-910650 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m52s)  kubelet          Node functional-910650 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m46s                  node-controller  Node functional-910650 event: Registered Node functional-910650 in Controller
	
	
	==> dmesg <==
	[  +0.126499] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.249005] systemd-fstab-generator[2221]: Ignoring "noauto" option for root device
	[  +8.497597] systemd-fstab-generator[2345]: Ignoring "noauto" option for root device
	[  +0.073537] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.790124] systemd-fstab-generator[2467]: Ignoring "noauto" option for root device
	[Feb 5 02:13] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.795983] kauditd_printk_skb: 37 callbacks suppressed
	[  +9.503663] systemd-fstab-generator[3265]: Ignoring "noauto" option for root device
	[ +18.155779] systemd-fstab-generator[4113]: Ignoring "noauto" option for root device
	[  +0.072422] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.053785] systemd-fstab-generator[4125]: Ignoring "noauto" option for root device
	[  +0.161166] systemd-fstab-generator[4139]: Ignoring "noauto" option for root device
	[  +0.135297] systemd-fstab-generator[4151]: Ignoring "noauto" option for root device
	[  +0.258873] systemd-fstab-generator[4179]: Ignoring "noauto" option for root device
	[  +0.708384] systemd-fstab-generator[4305]: Ignoring "noauto" option for root device
	[  +2.333681] systemd-fstab-generator[4814]: Ignoring "noauto" option for root device
	[  +4.256377] kauditd_printk_skb: 231 callbacks suppressed
	[  +9.250853] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.763249] systemd-fstab-generator[5373]: Ignoring "noauto" option for root device
	[Feb 5 02:14] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.000373] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.889066] kauditd_printk_skb: 41 callbacks suppressed
	[  +7.502551] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.565962] kauditd_printk_skb: 44 callbacks suppressed
	[  +7.233217] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [b4791a6b52e420c6b24d64b6a1131379826766c8ecc6f2c0b80c430f349589c7] <==
	{"level":"info","ts":"2025-02-05T02:14:40.065657Z","caller":"traceutil/trace.go:171","msg":"trace[892986851] linearizableReadLoop","detail":"{readStateIndex:909; appliedIndex:908; }","duration":"222.159826ms","start":"2025-02-05T02:14:39.843473Z","end":"2025-02-05T02:14:40.065633Z","steps":["trace[892986851] 'read index received'  (duration: 222.037229ms)","trace[892986851] 'applied index is now lower than readState.Index'  (duration: 122.246µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-05T02:14:40.065806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.275812ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-05T02:14:40.065838Z","caller":"traceutil/trace.go:171","msg":"trace[1549772304] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:835; }","duration":"222.380395ms","start":"2025-02-05T02:14:39.843450Z","end":"2025-02-05T02:14:40.065830Z","steps":["trace[1549772304] 'agreement among raft nodes before linearized reading'  (duration: 222.264946ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:14:40.066089Z","caller":"traceutil/trace.go:171","msg":"trace[1875012874] transaction","detail":"{read_only:false; response_revision:835; number_of_response:1; }","duration":"274.377535ms","start":"2025-02-05T02:14:39.791702Z","end":"2025-02-05T02:14:40.066080Z","steps":["trace[1875012874] 'process raft request'  (duration: 273.854974ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:14:42.592565Z","caller":"traceutil/trace.go:171","msg":"trace[1135397310] linearizableReadLoop","detail":"{readStateIndex:911; appliedIndex:910; }","duration":"175.015634ms","start":"2025-02-05T02:14:42.417535Z","end":"2025-02-05T02:14:42.592551Z","steps":["trace[1135397310] 'read index received'  (duration: 174.825432ms)","trace[1135397310] 'applied index is now lower than readState.Index'  (duration: 189.746µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-05T02:14:42.592669Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.120288ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-05T02:14:42.592685Z","caller":"traceutil/trace.go:171","msg":"trace[359917690] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:836; }","duration":"175.149636ms","start":"2025-02-05T02:14:42.417530Z","end":"2025-02-05T02:14:42.592680Z","steps":["trace[359917690] 'agreement among raft nodes before linearized reading'  (duration: 175.078271ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:14:42.592793Z","caller":"traceutil/trace.go:171","msg":"trace[1036766448] transaction","detail":"{read_only:false; response_revision:836; number_of_response:1; }","duration":"518.058729ms","start":"2025-02-05T02:14:42.074723Z","end":"2025-02-05T02:14:42.592782Z","steps":["trace[1036766448] 'process raft request'  (duration: 517.685768ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:14:42.593302Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T02:14:42.074705Z","time spent":"518.119202ms","remote":"127.0.0.1:33274","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:835 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-02-05T02:14:45.980730Z","caller":"traceutil/trace.go:171","msg":"trace[580217731] linearizableReadLoop","detail":"{readStateIndex:913; appliedIndex:912; }","duration":"272.098641ms","start":"2025-02-05T02:14:45.708617Z","end":"2025-02-05T02:14:45.980716Z","steps":["trace[580217731] 'read index received'  (duration: 272.038592ms)","trace[580217731] 'applied index is now lower than readState.Index'  (duration: 59.614µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-05T02:14:45.980802Z","caller":"traceutil/trace.go:171","msg":"trace[502972105] transaction","detail":"{read_only:false; response_revision:838; number_of_response:1; }","duration":"300.659197ms","start":"2025-02-05T02:14:45.680138Z","end":"2025-02-05T02:14:45.980797Z","steps":["trace[502972105] 'process raft request'  (duration: 300.444576ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:14:45.980864Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T02:14:45.680122Z","time spent":"300.695945ms","remote":"127.0.0.1:33370","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/functional-910650\" mod_revision:830 > success:<request_put:<key:\"/registry/leases/kube-node-lease/functional-910650\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/functional-910650\" > >"}
	{"level":"warn","ts":"2025-02-05T02:14:45.980941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.336344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-05T02:14:45.980955Z","caller":"traceutil/trace.go:171","msg":"trace[285038935] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:838; }","duration":"272.351182ms","start":"2025-02-05T02:14:45.708599Z","end":"2025-02-05T02:14:45.980951Z","steps":["trace[285038935] 'agreement among raft nodes before linearized reading'  (duration: 272.322998ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:14:45.982317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.452908ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-05T02:14:45.982397Z","caller":"traceutil/trace.go:171","msg":"trace[739577323] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:839; }","duration":"138.558883ms","start":"2025-02-05T02:14:45.843829Z","end":"2025-02-05T02:14:45.982388Z","steps":["trace[739577323] 'agreement among raft nodes before linearized reading'  (duration: 138.441416ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:14:45.982531Z","caller":"traceutil/trace.go:171","msg":"trace[1239497870] transaction","detail":"{read_only:false; response_revision:839; number_of_response:1; }","duration":"209.976977ms","start":"2025-02-05T02:14:45.772549Z","end":"2025-02-05T02:14:45.982526Z","steps":["trace[1239497870] 'process raft request'  (duration: 209.676068ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:14:47.857294Z","caller":"traceutil/trace.go:171","msg":"trace[52432748] linearizableReadLoop","detail":"{readStateIndex:923; appliedIndex:922; }","duration":"439.228304ms","start":"2025-02-05T02:14:47.418051Z","end":"2025-02-05T02:14:47.857279Z","steps":["trace[52432748] 'read index received'  (duration: 439.091465ms)","trace[52432748] 'applied index is now lower than readState.Index'  (duration: 136.426µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-05T02:14:47.857478Z","caller":"traceutil/trace.go:171","msg":"trace[2134246953] transaction","detail":"{read_only:false; response_revision:846; number_of_response:1; }","duration":"728.823662ms","start":"2025-02-05T02:14:47.128646Z","end":"2025-02-05T02:14:47.857470Z","steps":["trace[2134246953] 'process raft request'  (duration: 728.541245ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:14:47.857531Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.510009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-05T02:14:47.857581Z","caller":"traceutil/trace.go:171","msg":"trace[1573518482] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:846; }","duration":"148.569774ms","start":"2025-02-05T02:14:47.709003Z","end":"2025-02-05T02:14:47.857572Z","steps":["trace[1573518482] 'agreement among raft nodes before linearized reading'  (duration: 148.494236ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:14:47.857559Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T02:14:47.128631Z","time spent":"728.879624ms","remote":"127.0.0.1:33284","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3183,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/mysql-58ccfd96bb-dq6dw\" mod_revision:786 > success:<request_put:<key:\"/registry/pods/default/mysql-58ccfd96bb-dq6dw\" value_size:3130 >> failure:<request_range:<key:\"/registry/pods/default/mysql-58ccfd96bb-dq6dw\" > >"}
	{"level":"warn","ts":"2025-02-05T02:14:47.857661Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"439.610511ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-05T02:14:47.857689Z","caller":"traceutil/trace.go:171","msg":"trace[375504740] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:846; }","duration":"439.638259ms","start":"2025-02-05T02:14:47.418046Z","end":"2025-02-05T02:14:47.857684Z","steps":["trace[375504740] 'agreement among raft nodes before linearized reading'  (duration: 439.604525ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:14:56.193389Z","caller":"traceutil/trace.go:171","msg":"trace[2117920203] transaction","detail":"{read_only:false; response_revision:855; number_of_response:1; }","duration":"128.2334ms","start":"2025-02-05T02:14:56.065138Z","end":"2025-02-05T02:14:56.193371Z","steps":["trace[2117920203] 'process raft request'  (duration: 127.727732ms)"],"step_count":1}
	
	
	==> etcd [ed785972503d44c9f59a2ad848d64ed8a90dca50320d73ad31d376081ada8940] <==
	{"level":"info","ts":"2025-02-05T02:13:01.808475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-05T02:13:01.808502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 received MsgPreVoteResp from 46b6e3fd62fd4110 at term 2"}
	{"level":"info","ts":"2025-02-05T02:13:01.808517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 became candidate at term 3"}
	{"level":"info","ts":"2025-02-05T02:13:01.808526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 received MsgVoteResp from 46b6e3fd62fd4110 at term 3"}
	{"level":"info","ts":"2025-02-05T02:13:01.808535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46b6e3fd62fd4110 became leader at term 3"}
	{"level":"info","ts":"2025-02-05T02:13:01.808542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 46b6e3fd62fd4110 elected leader 46b6e3fd62fd4110 at term 3"}
	{"level":"info","ts":"2025-02-05T02:13:01.813052Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"46b6e3fd62fd4110","local-member-attributes":"{Name:functional-910650 ClientURLs:[https://192.168.39.25:2379]}","request-path":"/0/members/46b6e3fd62fd4110/attributes","cluster-id":"f5f955826d71045b","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-05T02:13:01.813110Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T02:13:01.813396Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T02:13:01.813981Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:13:01.814551Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-05T02:13:01.816878Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:13:01.822607Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.25:2379"}
	{"level":"info","ts":"2025-02-05T02:13:01.822699Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-05T02:13:01.822725Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-05T02:13:31.526937Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-05T02:13:31.527023Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-910650","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.25:2380"],"advertise-client-urls":["https://192.168.39.25:2379"]}
	{"level":"warn","ts":"2025-02-05T02:13:31.527093Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T02:13:31.527188Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T02:13:31.578098Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.25:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T02:13:31.578175Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.25:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-05T02:13:31.578226Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"46b6e3fd62fd4110","current-leader-member-id":"46b6e3fd62fd4110"}
	{"level":"info","ts":"2025-02-05T02:13:31.592714Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.25:2380"}
	{"level":"info","ts":"2025-02-05T02:13:31.592903Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.25:2380"}
	{"level":"info","ts":"2025-02-05T02:13:31.592928Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-910650","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.25:2380"],"advertise-client-urls":["https://192.168.39.25:2379"]}
	
	
	==> kernel <==
	 02:17:33 up 5 min,  0 users,  load average: 0.95, 0.64, 0.30
	Linux functional-910650 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [34b63fc06270bd4f4787320a89c1d86cdcbffbe38a9a0af59cc002a1c6115e53] <==
	I0205 02:13:44.722553       1 policy_source.go:240] refreshing policies
	I0205 02:13:44.728199       1 shared_informer.go:320] Caches are synced for configmaps
	I0205 02:13:44.728424       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0205 02:13:44.731599       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0205 02:13:44.738511       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0205 02:13:44.795024       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0205 02:13:45.553936       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0205 02:13:45.631563       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0205 02:13:46.297790       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0205 02:13:46.330771       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0205 02:13:46.360711       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0205 02:13:46.366707       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0205 02:13:48.188754       1 controller.go:615] quota admission added evaluator for: endpoints
	I0205 02:13:48.239206       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0205 02:13:48.339865       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0205 02:14:05.077814       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.87.132"}
	I0205 02:14:09.814168       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.159.9"}
	I0205 02:14:10.352980       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.96.219"}
	I0205 02:14:23.526819       1 controller.go:615] quota admission added evaluator for: namespaces
	I0205 02:14:23.790406       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.105.181.64"}
	I0205 02:14:23.839715       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.225.243"}
	I0205 02:14:23.888783       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.235.86"}
	E0205 02:14:30.749577       1 conn.go:339] Error on socket receive: read tcp 192.168.39.25:8441->192.168.39.1:42144: use of closed network connection
	E0205 02:14:53.973396       1 conn.go:339] Error on socket receive: read tcp 192.168.39.25:8441->192.168.39.1:47646: use of closed network connection
	E0205 02:14:55.069472       1 conn.go:339] Error on socket receive: read tcp 192.168.39.25:8441->192.168.39.1:47658: use of closed network connection
	
	
	==> kube-controller-manager [a254ce2946e24c21d7510fbe9dee57557272200a3dfb5dc0431cc9294b0001ae] <==
	I0205 02:13:06.234087       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0205 02:13:06.234109       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0205 02:13:06.234114       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0205 02:13:06.234118       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0205 02:13:06.234222       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-910650"
	I0205 02:13:06.236722       1 shared_informer.go:320] Caches are synced for persistent volume
	I0205 02:13:06.238948       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0205 02:13:06.239915       1 shared_informer.go:320] Caches are synced for endpoint
	I0205 02:13:06.241970       1 shared_informer.go:320] Caches are synced for attach detach
	I0205 02:13:06.248404       1 shared_informer.go:320] Caches are synced for PV protection
	I0205 02:13:06.249395       1 shared_informer.go:320] Caches are synced for PVC protection
	I0205 02:13:06.254221       1 shared_informer.go:320] Caches are synced for garbage collector
	I0205 02:13:06.271409       1 shared_informer.go:320] Caches are synced for GC
	I0205 02:13:06.271537       1 shared_informer.go:320] Caches are synced for deployment
	I0205 02:13:06.271423       1 shared_informer.go:320] Caches are synced for taint
	I0205 02:13:06.271701       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0205 02:13:06.271885       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-910650"
	I0205 02:13:06.271982       1 shared_informer.go:320] Caches are synced for disruption
	I0205 02:13:06.272486       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0205 02:13:06.272530       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0205 02:13:06.272877       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0205 02:13:06.278217       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0205 02:13:06.280860       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0205 02:13:09.910659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.525898ms"
	I0205 02:13:09.910784       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="40.684µs"
	
	
	==> kube-controller-manager [c2186b22be5a178d033842884a0b7b70fbafb2b51c014dcce5c6a661cf2d8200] <==
	I0205 02:14:23.709240       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="8.865734ms"
	E0205 02:14:23.709642       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0205 02:14:23.718249       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="13.67191ms"
	E0205 02:14:23.718358       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0205 02:14:23.718558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="6.767817ms"
	E0205 02:14:23.718670       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0205 02:14:23.743974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="18.669374ms"
	I0205 02:14:23.767152       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="34.480584ms"
	I0205 02:14:23.789858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="22.64358ms"
	I0205 02:14:23.790064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="42.556µs"
	I0205 02:14:23.803038       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="56.622µs"
	I0205 02:14:23.814373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="70.301544ms"
	I0205 02:14:23.814448       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="33.595µs"
	I0205 02:14:23.878970       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="27.44408ms"
	I0205 02:14:23.911704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="32.603011ms"
	I0205 02:14:23.911831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="71.608µs"
	I0205 02:14:23.911883       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="17.627µs"
	I0205 02:14:27.550842       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="11.658991ms"
	I0205 02:14:27.550920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="40.338µs"
	I0205 02:14:34.619406       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="21.163683ms"
	I0205 02:14:34.620892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="268.271µs"
	I0205 02:14:46.248848       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-910650"
	I0205 02:14:47.877123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="16.188305ms"
	I0205 02:14:47.877948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="31.555µs"
	I0205 02:15:16.902767       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-910650"
	
	
	==> kube-proxy [45a1536465e4c28d9ec4134af500dbe93c78da372ecb562b568952278deb87be] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0205 02:13:04.565669       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0205 02:13:04.577801       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.25"]
	E0205 02:13:04.577878       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 02:13:04.627493       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0205 02:13:04.627544       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0205 02:13:04.627568       1 server_linux.go:170] "Using iptables Proxier"
	I0205 02:13:04.629777       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 02:13:04.630016       1 server.go:497] "Version info" version="v1.32.1"
	I0205 02:13:04.630042       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:13:04.632751       1 config.go:199] "Starting service config controller"
	I0205 02:13:04.632796       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 02:13:04.632822       1 config.go:105] "Starting endpoint slice config controller"
	I0205 02:13:04.632827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 02:13:04.633295       1 config.go:329] "Starting node config controller"
	I0205 02:13:04.633372       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 02:13:04.733987       1 shared_informer.go:320] Caches are synced for service config
	I0205 02:13:04.734073       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0205 02:13:04.734768       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [8c711ebfd415d07af2dbc8d9e2241d809a614cc97b9cff2fb0354752e86b8917] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0205 02:13:46.314561       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0205 02:13:46.326744       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.25"]
	E0205 02:13:46.326891       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 02:13:46.381974       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0205 02:13:46.382013       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0205 02:13:46.382036       1 server_linux.go:170] "Using iptables Proxier"
	I0205 02:13:46.384240       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 02:13:46.384545       1 server.go:497] "Version info" version="v1.32.1"
	I0205 02:13:46.384566       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:13:46.386152       1 config.go:199] "Starting service config controller"
	I0205 02:13:46.386202       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 02:13:46.386226       1 config.go:105] "Starting endpoint slice config controller"
	I0205 02:13:46.386229       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 02:13:46.386754       1 config.go:329] "Starting node config controller"
	I0205 02:13:46.386781       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 02:13:46.487024       1 shared_informer.go:320] Caches are synced for node config
	I0205 02:13:46.487126       1 shared_informer.go:320] Caches are synced for service config
	I0205 02:13:46.487150       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [98af606a04faa861c56f4063ce1b50abba73dbd601d761d4db162ee10ef18a04] <==
	I0205 02:13:01.234882       1 serving.go:386] Generated self-signed cert in-memory
	W0205 02:13:02.965098       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0205 02:13:02.965260       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0205 02:13:02.965420       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0205 02:13:02.965452       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0205 02:13:03.000421       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0205 02:13:03.004235       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:13:03.006386       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 02:13:03.010419       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0205 02:13:03.011010       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0205 02:13:03.011102       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0205 02:13:03.111567       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0205 02:13:31.536990       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [af4e3b95b4a4d42dff82f80e07c3d4757852331255b6759159e88511d44a5bbd] <==
	I0205 02:13:42.831212       1 serving.go:386] Generated self-signed cert in-memory
	I0205 02:13:44.736131       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0205 02:13:44.736174       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:13:44.741056       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0205 02:13:44.741115       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 02:13:44.741164       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0205 02:13:44.741187       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0205 02:13:44.741213       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0205 02:13:44.741255       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0205 02:13:44.741084       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0205 02:13:44.741422       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0205 02:13:44.841459       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0205 02:13:44.841515       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0205 02:13:44.842223       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Feb 05 02:16:31 functional-910650 kubelet[4821]: E0205 02:16:31.706234    4821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721791705953393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:16:31 functional-910650 kubelet[4821]: E0205 02:16:31.706536    4821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721791705953393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:16:41 functional-910650 kubelet[4821]: E0205 02:16:41.526503    4821 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 05 02:16:41 functional-910650 kubelet[4821]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 05 02:16:41 functional-910650 kubelet[4821]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 05 02:16:41 functional-910650 kubelet[4821]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 05 02:16:41 functional-910650 kubelet[4821]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 05 02:16:41 functional-910650 kubelet[4821]: E0205 02:16:41.617993    4821 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod473e65733f7294c10e7cbbb013aae69c/crio-d4280d93cd564ab43e5a7c8f5dd41847ca8dcb091c1bbc677dc4ad7c80e70bed: Error finding container d4280d93cd564ab43e5a7c8f5dd41847ca8dcb091c1bbc677dc4ad7c80e70bed: Status 404 returned error can't find the container with id d4280d93cd564ab43e5a7c8f5dd41847ca8dcb091c1bbc677dc4ad7c80e70bed
	Feb 05 02:16:41 functional-910650 kubelet[4821]: E0205 02:16:41.618381    4821 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod08a17d82-c3b6-47c4-9da0-e4b26ec25008/crio-7276b1d34ea5beaaa768c41eef61c70789c9528e8b8aa05cf7baff3479657a34: Error finding container 7276b1d34ea5beaaa768c41eef61c70789c9528e8b8aa05cf7baff3479657a34: Status 404 returned error can't find the container with id 7276b1d34ea5beaaa768c41eef61c70789c9528e8b8aa05cf7baff3479657a34
	Feb 05 02:16:41 functional-910650 kubelet[4821]: E0205 02:16:41.618714    4821 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod247a5c29-8764-4782-ae98-364cdfde9beb/crio-252be47a5347ec3004924811fa4502a3ec3cd9e22ff9e12e3f9d9a5596a34f19: Error finding container 252be47a5347ec3004924811fa4502a3ec3cd9e22ff9e12e3f9d9a5596a34f19: Status 404 returned error can't find the container with id 252be47a5347ec3004924811fa4502a3ec3cd9e22ff9e12e3f9d9a5596a34f19
	Feb 05 02:16:41 functional-910650 kubelet[4821]: E0205 02:16:41.618900    4821 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd3073fa6296fe10e8ed4ee220ea36bc7/crio-a464e6f032f6c2be3b648e1dde90f35e393cba4f63880b0cf7bbbb120edf28b5: Error finding container a464e6f032f6c2be3b648e1dde90f35e393cba4f63880b0cf7bbbb120edf28b5: Status 404 returned error can't find the container with id a464e6f032f6c2be3b648e1dde90f35e393cba4f63880b0cf7bbbb120edf28b5
	Feb 05 02:16:41 functional-910650 kubelet[4821]: E0205 02:16:41.619121    4821 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod5d59d5c0-03f1-421e-9809-9c010a4a6282/crio-edffae40827327acbfcf362b6b40ca4ec3ec365c6fbb710fb0f010e412f32c8f: Error finding container edffae40827327acbfcf362b6b40ca4ec3ec365c6fbb710fb0f010e412f32c8f: Status 404 returned error can't find the container with id edffae40827327acbfcf362b6b40ca4ec3ec365c6fbb710fb0f010e412f32c8f
	Feb 05 02:16:41 functional-910650 kubelet[4821]: E0205 02:16:41.619393    4821 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda1e39b1c0d320386380cb40ab221bd88/crio-e10f9b20be5542b570d02712703dedf589ab5a536c0024758620f50bb6eab000: Error finding container e10f9b20be5542b570d02712703dedf589ab5a536c0024758620f50bb6eab000: Status 404 returned error can't find the container with id e10f9b20be5542b570d02712703dedf589ab5a536c0024758620f50bb6eab000
	Feb 05 02:16:41 functional-910650 kubelet[4821]: E0205 02:16:41.708829    4821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721801708382426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:16:41 functional-910650 kubelet[4821]: E0205 02:16:41.708947    4821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721801708382426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:16:51 functional-910650 kubelet[4821]: E0205 02:16:51.710810    4821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721811710551714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:16:51 functional-910650 kubelet[4821]: E0205 02:16:51.710849    4821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721811710551714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:17:01 functional-910650 kubelet[4821]: E0205 02:17:01.712444    4821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721821712030863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:17:01 functional-910650 kubelet[4821]: E0205 02:17:01.712761    4821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721821712030863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:17:11 functional-910650 kubelet[4821]: E0205 02:17:11.714376    4821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721831714042153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:17:11 functional-910650 kubelet[4821]: E0205 02:17:11.714410    4821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721831714042153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:17:21 functional-910650 kubelet[4821]: E0205 02:17:21.716368    4821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721841716039535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:17:21 functional-910650 kubelet[4821]: E0205 02:17:21.716394    4821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721841716039535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:17:31 functional-910650 kubelet[4821]: E0205 02:17:31.719317    4821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721851718801849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:17:31 functional-910650 kubelet[4821]: E0205 02:17:31.719745    4821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721851718801849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281112,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [644be7f12ed065b2702594d40d3094c1382681f7cb18501a0124174a1cede414] <==
	2025/02/05 02:14:34 Using namespace: kubernetes-dashboard
	2025/02/05 02:14:34 Using in-cluster config to connect to apiserver
	2025/02/05 02:14:34 Using secret token for csrf signing
	2025/02/05 02:14:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/02/05 02:14:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/02/05 02:14:34 Successful initial request to the apiserver, version: v1.32.1
	2025/02/05 02:14:34 Generating JWE encryption key
	2025/02/05 02:14:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/02/05 02:14:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/02/05 02:14:34 Initializing JWE encryption key from synchronized object
	2025/02/05 02:14:34 Creating in-cluster Sidecar client
	2025/02/05 02:14:34 Successful request to sidecar
	2025/02/05 02:14:34 Serving insecurely on HTTP port: 9090
	2025/02/05 02:14:34 Starting overwatch
	
	
	==> storage-provisioner [0e79d65a368ef1c6eb02768657bfa8d84264f9e629df72c0e5259231ba4f2c05] <==
	I0205 02:13:04.366521       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0205 02:13:04.390905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0205 02:13:04.390968       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0205 02:13:04.414462       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0205 02:13:04.414605       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-910650_505320ed-16ae-4497-b90b-c14477766486!
	I0205 02:13:04.415677       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"61253093-98f2-4533-9d65-e6cd4c8fd582", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-910650_505320ed-16ae-4497-b90b-c14477766486 became leader
	I0205 02:13:04.514877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-910650_505320ed-16ae-4497-b90b-c14477766486!
	
	
	==> storage-provisioner [b93939ed04e4f6a26930d5d93af594dda472e1048c13ba6406fb596b56e57968] <==
	I0205 02:13:46.076651       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0205 02:13:46.112131       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0205 02:13:46.112381       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0205 02:14:03.546544       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0205 02:14:03.546775       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-910650_1a646532-ba29-419b-bdf5-6fd88ce1e855!
	I0205 02:14:03.547552       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"61253093-98f2-4533-9d65-e6cd4c8fd582", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-910650_1a646532-ba29-419b-bdf5-6fd88ce1e855 became leader
	I0205 02:14:03.648303       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-910650_1a646532-ba29-419b-bdf5-6fd88ce1e855!
	I0205 02:14:15.439726       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0205 02:14:15.441205       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c81f2ca9-852d-4219-9d39-32123f7ae8d9", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0205 02:14:15.440070       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    8c9288ae-b281-4579-aacb-6a1332159f4d 314 0 2025-02-05 02:12:32 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-02-05 02:12:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-c81f2ca9-852d-4219-9d39-32123f7ae8d9 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  c81f2ca9-852d-4219-9d39-32123f7ae8d9 687 0 2025-02-05 02:14:15 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-02-05 02:14:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-02-05 02:14:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0205 02:14:15.444123       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-c81f2ca9-852d-4219-9d39-32123f7ae8d9" provisioned
	I0205 02:14:15.444228       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0205 02:14:15.444311       1 volume_store.go:212] Trying to save persistentvolume "pvc-c81f2ca9-852d-4219-9d39-32123f7ae8d9"
	I0205 02:14:15.463932       1 volume_store.go:219] persistentvolume "pvc-c81f2ca9-852d-4219-9d39-32123f7ae8d9" saved
	I0205 02:14:15.465109       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c81f2ca9-852d-4219-9d39-32123f7ae8d9", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-c81f2ca9-852d-4219-9d39-32123f7ae8d9
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-910650 -n functional-910650
helpers_test.go:261: (dbg) Run:  kubectl --context functional-910650 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-910650 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-910650 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-910650/192.168.39.25
	Start Time:       Wed, 05 Feb 2025 02:14:12 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e4c5834ef4945982ac0f60a01869afded58652eec5be22508419d40ad1c93126
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 05 Feb 2025 02:14:16 +0000
	      Finished:     Wed, 05 Feb 2025 02:14:16 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4hcrc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-4hcrc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m21s  default-scheduler  Successfully assigned default/busybox-mount to functional-910650
	  Normal  Pulling    3m21s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m18s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.35s (2.35s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m18s  kubelet            Created container: mount-munger
	  Normal  Started    3m18s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-910650/192.168.39.25
	Start Time:       Wed, 05 Feb 2025 02:14:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9d6fr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-9d6fr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3m2s  default-scheduler  Successfully assigned default/sp-pod to functional-910650

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (204.83s)
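
The provisioner log above shows pvc-c81f2ca9-852d-4219-9d39-32123f7ae8d9 being provisioned and saved, so the claim itself bound; what never happened is sp-pod leaving ContainerCreating for its docker.io/nginx container before the 204s test window closed. For manual triage against the same cluster, a rough standalone sketch of the wait the test gives up on (assumes kubectl on PATH; the context, namespace, and pod name are taken from this report, while the file name, 4-minute window, and 10-second poll interval are arbitrary choices of the sketch, not the test's values):

// pvc_pod_triage.go: rough, standalone reproduction of the readiness wait that timed out above.
// Sketch only: context/pod names come from this report, not from the test source.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// kubectl runs a kubectl command against the functional-910650 context in the default namespace.
func kubectl(args ...string) (string, error) {
	base := []string{"--context", "functional-910650", "--namespace", "default"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // generous window for manual triage
	for time.Now().Before(deadline) {
		phase, err := kubectl("get", "pod", "sp-pod", "-o", "jsonpath={.status.phase}")
		if err == nil && phase == "Running" {
			fmt.Println("sp-pod is Running; the PVC-backed container started")
			return
		}
		fmt.Printf("sp-pod phase=%q (err=%v); retrying in 10s\n", phase, err)
		time.Sleep(10 * time.Second)
	}
	// On timeout, collect the same detail the post-mortem above captured.
	desc, _ := kubectl("describe", "pod", "sp-pod")
	fmt.Println(desc)
}

Running this with go run pvc_pod_triage.go while the profile is still up reproduces, in miniature, the describe output recorded in the post-mortem above.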

                                                
                                    
TestPreload (289.76s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-375673 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0205 03:00:47.840842   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:01:04.765164   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-375673 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m10.894195087s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-375673 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-375673 image pull gcr.io/k8s-minikube/busybox: (2.321850893s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-375673
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-375673: (1m30.950500378s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-375673 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-375673 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.597600004s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-375673 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-02-05 03:04:06.716507742 +0000 UTC m=+3632.627348904
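The expectation at preload_test.go:76 is that gcr.io/k8s-minikube/busybox, pulled before the stop, still shows up in out/minikube-linux-amd64 -p test-preload-375673 image list after the restart; the dump above contains only the images from the v1.24.4 preload tarball, so the manually pulled image no longer appears in the list. A minimal sketch of that style of check (binary path and profile name are the ones from this run; the file name and error wording are the sketch's own, not the test's helper code):

// preload_imagelist_check.go: the style of assertion that failed at preload_test.go:76.
// Sketch only: binary path and profile name come from the log above, not from the test's helpers.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// List the images currently known to the test-preload-375673 profile.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-375673", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "image list failed: %v\n%s", err, out)
		os.Exit(1)
	}
	// The pulled image should survive the stop/start cycle and still be listed.
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Fprintf(os.Stderr, "gcr.io/k8s-minikube/busybox missing from image list:\n%s", out)
		os.Exit(1)
	}
	fmt.Println("busybox image survived the stop/start cycle")
}

Run with go run preload_imagelist_check.go from the workspace root where out/minikube-linux-amd64 lives; a non-zero exit mirrors the failure recorded above.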
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-375673 -n test-preload-375673
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-375673 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-375673 logs -n 25: (1.015218664s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-794103 ssh -n                                                                 | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:47 UTC | 05 Feb 25 02:47 UTC |
	|         | multinode-794103-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-794103 ssh -n multinode-794103 sudo cat                                       | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:47 UTC | 05 Feb 25 02:47 UTC |
	|         | /home/docker/cp-test_multinode-794103-m03_multinode-794103.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-794103 cp multinode-794103-m03:/home/docker/cp-test.txt                       | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:47 UTC | 05 Feb 25 02:47 UTC |
	|         | multinode-794103-m02:/home/docker/cp-test_multinode-794103-m03_multinode-794103-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-794103 ssh -n                                                                 | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:47 UTC | 05 Feb 25 02:47 UTC |
	|         | multinode-794103-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-794103 ssh -n multinode-794103-m02 sudo cat                                   | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:47 UTC | 05 Feb 25 02:47 UTC |
	|         | /home/docker/cp-test_multinode-794103-m03_multinode-794103-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-794103 node stop m03                                                          | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:47 UTC | 05 Feb 25 02:47 UTC |
	| node    | multinode-794103 node start                                                             | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:47 UTC | 05 Feb 25 02:47 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-794103                                                                | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:47 UTC |                     |
	| stop    | -p multinode-794103                                                                     | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:47 UTC | 05 Feb 25 02:50 UTC |
	| start   | -p multinode-794103                                                                     | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:50 UTC | 05 Feb 25 02:53 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-794103                                                                | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:53 UTC |                     |
	| node    | multinode-794103 node delete                                                            | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:53 UTC | 05 Feb 25 02:53 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-794103 stop                                                                   | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:53 UTC | 05 Feb 25 02:56 UTC |
	| start   | -p multinode-794103                                                                     | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:56 UTC | 05 Feb 25 02:58 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-794103                                                                | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:58 UTC |                     |
	| start   | -p multinode-794103-m02                                                                 | multinode-794103-m02 | jenkins | v1.35.0 | 05 Feb 25 02:58 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-794103-m03                                                                 | multinode-794103-m03 | jenkins | v1.35.0 | 05 Feb 25 02:58 UTC | 05 Feb 25 02:59 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-794103                                                                 | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:59 UTC |                     |
	| delete  | -p multinode-794103-m03                                                                 | multinode-794103-m03 | jenkins | v1.35.0 | 05 Feb 25 02:59 UTC | 05 Feb 25 02:59 UTC |
	| delete  | -p multinode-794103                                                                     | multinode-794103     | jenkins | v1.35.0 | 05 Feb 25 02:59 UTC | 05 Feb 25 02:59 UTC |
	| start   | -p test-preload-375673                                                                  | test-preload-375673  | jenkins | v1.35.0 | 05 Feb 25 02:59 UTC | 05 Feb 25 03:01 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-375673 image pull                                                          | test-preload-375673  | jenkins | v1.35.0 | 05 Feb 25 03:01 UTC | 05 Feb 25 03:01 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-375673                                                                  | test-preload-375673  | jenkins | v1.35.0 | 05 Feb 25 03:01 UTC | 05 Feb 25 03:03 UTC |
	| start   | -p test-preload-375673                                                                  | test-preload-375673  | jenkins | v1.35.0 | 05 Feb 25 03:03 UTC | 05 Feb 25 03:04 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-375673 image list                                                          | test-preload-375673  | jenkins | v1.35.0 | 05 Feb 25 03:04 UTC | 05 Feb 25 03:04 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 03:03:03
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 03:03:03.955856   52158 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:03:03.956421   52158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:03:03.956440   52158 out.go:358] Setting ErrFile to fd 2...
	I0205 03:03:03.956447   52158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:03:03.956871   52158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:03:03.957829   52158 out.go:352] Setting JSON to false
	I0205 03:03:03.958756   52158 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6335,"bootTime":1738718249,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:03:03.958848   52158 start.go:139] virtualization: kvm guest
	I0205 03:03:03.960860   52158 out.go:177] * [test-preload-375673] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:03:03.962326   52158 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:03:03.962337   52158 notify.go:220] Checking for updates...
	I0205 03:03:03.964865   52158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:03:03.965992   52158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:03:03.966990   52158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:03:03.967983   52158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:03:03.968966   52158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:03:03.970407   52158 config.go:182] Loaded profile config "test-preload-375673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0205 03:03:03.970827   52158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:03:03.970888   52158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:03:03.985664   52158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0205 03:03:03.986228   52158 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:03:03.986858   52158 main.go:141] libmachine: Using API Version  1
	I0205 03:03:03.986887   52158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:03:03.987208   52158 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:03:03.987378   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	I0205 03:03:03.989030   52158 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0205 03:03:03.990214   52158 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:03:03.990567   52158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:03:03.990610   52158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:03:04.005235   52158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41007
	I0205 03:03:04.005764   52158 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:03:04.006232   52158 main.go:141] libmachine: Using API Version  1
	I0205 03:03:04.006255   52158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:03:04.006558   52158 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:03:04.006731   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	I0205 03:03:04.042722   52158 out.go:177] * Using the kvm2 driver based on existing profile
	I0205 03:03:04.043809   52158 start.go:297] selected driver: kvm2
	I0205 03:03:04.043824   52158 start.go:901] validating driver "kvm2" against &{Name:test-preload-375673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-375673
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:03:04.043936   52158 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:03:04.044654   52158 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:03:04.044733   52158 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:03:04.059952   52158 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:03:04.060327   52158 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:03:04.060357   52158 cni.go:84] Creating CNI manager for ""
	I0205 03:03:04.060397   52158 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:03:04.060457   52158 start.go:340] cluster config:
	{Name:test-preload-375673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-375673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:03:04.060564   52158 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:03:04.062997   52158 out.go:177] * Starting "test-preload-375673" primary control-plane node in "test-preload-375673" cluster
	I0205 03:03:04.064175   52158 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0205 03:03:04.090442   52158 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0205 03:03:04.090482   52158 cache.go:56] Caching tarball of preloaded images
	I0205 03:03:04.090642   52158 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0205 03:03:04.092110   52158 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0205 03:03:04.093169   52158 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0205 03:03:04.120644   52158 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0205 03:03:09.767277   52158 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0205 03:03:09.767374   52158 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0205 03:03:10.609827   52158 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0205 03:03:10.609951   52158 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/config.json ...
	I0205 03:03:10.610205   52158 start.go:360] acquireMachinesLock for test-preload-375673: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:03:10.610266   52158 start.go:364] duration metric: took 41.138µs to acquireMachinesLock for "test-preload-375673"
	I0205 03:03:10.610291   52158 start.go:96] Skipping create...Using existing machine configuration
	I0205 03:03:10.610300   52158 fix.go:54] fixHost starting: 
	I0205 03:03:10.610619   52158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:03:10.610667   52158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:03:10.625093   52158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
	I0205 03:03:10.625606   52158 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:03:10.626026   52158 main.go:141] libmachine: Using API Version  1
	I0205 03:03:10.626047   52158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:03:10.626400   52158 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:03:10.626584   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	I0205 03:03:10.626712   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetState
	I0205 03:03:10.628227   52158 fix.go:112] recreateIfNeeded on test-preload-375673: state=Stopped err=<nil>
	I0205 03:03:10.628265   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	W0205 03:03:10.628417   52158 fix.go:138] unexpected machine state, will restart: <nil>
	I0205 03:03:10.630867   52158 out.go:177] * Restarting existing kvm2 VM for "test-preload-375673" ...
	I0205 03:03:10.631937   52158 main.go:141] libmachine: (test-preload-375673) Calling .Start
	I0205 03:03:10.632131   52158 main.go:141] libmachine: (test-preload-375673) starting domain...
	I0205 03:03:10.632152   52158 main.go:141] libmachine: (test-preload-375673) ensuring networks are active...
	I0205 03:03:10.632837   52158 main.go:141] libmachine: (test-preload-375673) Ensuring network default is active
	I0205 03:03:10.633165   52158 main.go:141] libmachine: (test-preload-375673) Ensuring network mk-test-preload-375673 is active
	I0205 03:03:10.633539   52158 main.go:141] libmachine: (test-preload-375673) getting domain XML...
	I0205 03:03:10.634221   52158 main.go:141] libmachine: (test-preload-375673) creating domain...
	I0205 03:03:11.814831   52158 main.go:141] libmachine: (test-preload-375673) waiting for IP...
	I0205 03:03:11.815874   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:11.816269   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:11.816390   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:11.816292   52210 retry.go:31] will retry after 252.017543ms: waiting for domain to come up
	I0205 03:03:12.069907   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:12.070358   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:12.070403   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:12.070331   52210 retry.go:31] will retry after 279.399791ms: waiting for domain to come up
	I0205 03:03:12.351993   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:12.352515   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:12.352538   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:12.352465   52210 retry.go:31] will retry after 294.307913ms: waiting for domain to come up
	I0205 03:03:12.647886   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:12.648330   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:12.648361   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:12.648309   52210 retry.go:31] will retry after 410.721312ms: waiting for domain to come up
	I0205 03:03:13.060908   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:13.061357   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:13.061387   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:13.061313   52210 retry.go:31] will retry after 708.533806ms: waiting for domain to come up
	I0205 03:03:13.771430   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:13.771850   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:13.771888   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:13.771839   52210 retry.go:31] will retry after 643.941634ms: waiting for domain to come up
	I0205 03:03:14.418057   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:14.418522   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:14.418553   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:14.418480   52210 retry.go:31] will retry after 1.088659644s: waiting for domain to come up
	I0205 03:03:15.508575   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:15.508976   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:15.508998   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:15.508953   52210 retry.go:31] will retry after 1.235640677s: waiting for domain to come up
	I0205 03:03:16.746400   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:16.746850   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:16.746883   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:16.746804   52210 retry.go:31] will retry after 1.754549387s: waiting for domain to come up
	I0205 03:03:18.502579   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:18.503017   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:18.503048   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:18.502974   52210 retry.go:31] will retry after 2.187098437s: waiting for domain to come up
	I0205 03:03:20.691961   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:20.692455   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:20.692484   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:20.692422   52210 retry.go:31] will retry after 1.948402108s: waiting for domain to come up
	I0205 03:03:22.642038   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:22.642429   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:22.642451   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:22.642422   52210 retry.go:31] will retry after 3.634370858s: waiting for domain to come up
	I0205 03:03:26.277868   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:26.278264   52158 main.go:141] libmachine: (test-preload-375673) DBG | unable to find current IP address of domain test-preload-375673 in network mk-test-preload-375673
	I0205 03:03:26.278284   52158 main.go:141] libmachine: (test-preload-375673) DBG | I0205 03:03:26.278240   52210 retry.go:31] will retry after 3.603698318s: waiting for domain to come up
	I0205 03:03:29.886021   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:29.886480   52158 main.go:141] libmachine: (test-preload-375673) found domain IP: 192.168.39.150
	I0205 03:03:29.886504   52158 main.go:141] libmachine: (test-preload-375673) reserving static IP address...
	I0205 03:03:29.886527   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has current primary IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:29.886973   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "test-preload-375673", mac: "52:54:00:51:6e:cf", ip: "192.168.39.150"} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:29.887004   52158 main.go:141] libmachine: (test-preload-375673) reserved static IP address 192.168.39.150 for domain test-preload-375673
	I0205 03:03:29.887016   52158 main.go:141] libmachine: (test-preload-375673) DBG | skip adding static IP to network mk-test-preload-375673 - found existing host DHCP lease matching {name: "test-preload-375673", mac: "52:54:00:51:6e:cf", ip: "192.168.39.150"}
	I0205 03:03:29.887029   52158 main.go:141] libmachine: (test-preload-375673) waiting for SSH...
	I0205 03:03:29.887046   52158 main.go:141] libmachine: (test-preload-375673) DBG | Getting to WaitForSSH function...
	I0205 03:03:29.888922   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:29.889272   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:29.889304   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:29.889412   52158 main.go:141] libmachine: (test-preload-375673) DBG | Using SSH client type: external
	I0205 03:03:29.889432   52158 main.go:141] libmachine: (test-preload-375673) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/test-preload-375673/id_rsa (-rw-------)
	I0205 03:03:29.889489   52158 main.go:141] libmachine: (test-preload-375673) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/test-preload-375673/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 03:03:29.889504   52158 main.go:141] libmachine: (test-preload-375673) DBG | About to run SSH command:
	I0205 03:03:29.889517   52158 main.go:141] libmachine: (test-preload-375673) DBG | exit 0
	I0205 03:03:30.009284   52158 main.go:141] libmachine: (test-preload-375673) DBG | SSH cmd err, output: <nil>: 
	I0205 03:03:30.009718   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetConfigRaw
	I0205 03:03:30.010402   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetIP
	I0205 03:03:30.012820   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.013179   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:30.013207   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.013453   52158 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/config.json ...
	I0205 03:03:30.013699   52158 machine.go:93] provisionDockerMachine start ...
	I0205 03:03:30.013720   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	I0205 03:03:30.013928   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:30.016049   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.016392   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:30.016416   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.016539   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHPort
	I0205 03:03:30.016729   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.016848   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.016970   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHUsername
	I0205 03:03:30.017083   52158 main.go:141] libmachine: Using SSH client type: native
	I0205 03:03:30.017302   52158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0205 03:03:30.017312   52158 main.go:141] libmachine: About to run SSH command:
	hostname
	I0205 03:03:30.113397   52158 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0205 03:03:30.113431   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetMachineName
	I0205 03:03:30.113719   52158 buildroot.go:166] provisioning hostname "test-preload-375673"
	I0205 03:03:30.113751   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetMachineName
	I0205 03:03:30.113998   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:30.116591   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.116920   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:30.116952   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.117081   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHPort
	I0205 03:03:30.117251   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.117412   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.117545   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHUsername
	I0205 03:03:30.117699   52158 main.go:141] libmachine: Using SSH client type: native
	I0205 03:03:30.117866   52158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0205 03:03:30.117878   52158 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-375673 && echo "test-preload-375673" | sudo tee /etc/hostname
	I0205 03:03:30.226284   52158 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-375673
	
	I0205 03:03:30.226344   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:30.228927   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.229295   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:30.229330   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.229512   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHPort
	I0205 03:03:30.229672   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.229804   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.229912   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHUsername
	I0205 03:03:30.230033   52158 main.go:141] libmachine: Using SSH client type: native
	I0205 03:03:30.230206   52158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0205 03:03:30.230221   52158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-375673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-375673/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-375673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:03:30.333591   52158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:03:30.333627   52158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:03:30.333669   52158 buildroot.go:174] setting up certificates
	I0205 03:03:30.333679   52158 provision.go:84] configureAuth start
	I0205 03:03:30.333690   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetMachineName
	I0205 03:03:30.333997   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetIP
	I0205 03:03:30.336487   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.336879   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:30.336918   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.337046   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:30.339295   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.339595   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:30.339650   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.339760   52158 provision.go:143] copyHostCerts
	I0205 03:03:30.339821   52158 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:03:30.339834   52158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:03:30.339899   52158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:03:30.339978   52158 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:03:30.339983   52158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:03:30.340006   52158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:03:30.340056   52158 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:03:30.340064   52158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:03:30.340083   52158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:03:30.340129   52158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.test-preload-375673 san=[127.0.0.1 192.168.39.150 localhost minikube test-preload-375673]
	I0205 03:03:30.487641   52158 provision.go:177] copyRemoteCerts
	I0205 03:03:30.487699   52158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:03:30.487721   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:30.490457   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.490710   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:30.490734   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.490907   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHPort
	I0205 03:03:30.491091   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.491234   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHUsername
	I0205 03:03:30.491338   52158 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/test-preload-375673/id_rsa Username:docker}
	I0205 03:03:30.570966   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:03:30.593744   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0205 03:03:30.616514   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0205 03:03:30.639239   52158 provision.go:87] duration metric: took 305.548867ms to configureAuth
	I0205 03:03:30.639266   52158 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:03:30.639418   52158 config.go:182] Loaded profile config "test-preload-375673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0205 03:03:30.639489   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:30.642182   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.642491   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:30.642514   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.642681   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHPort
	I0205 03:03:30.642889   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.643033   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.643132   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHUsername
	I0205 03:03:30.643289   52158 main.go:141] libmachine: Using SSH client type: native
	I0205 03:03:30.643494   52158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0205 03:03:30.643516   52158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:03:30.853983   52158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 03:03:30.854010   52158 machine.go:96] duration metric: took 840.29649ms to provisionDockerMachine
	I0205 03:03:30.854032   52158 start.go:293] postStartSetup for "test-preload-375673" (driver="kvm2")
	I0205 03:03:30.854047   52158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:03:30.854077   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	I0205 03:03:30.854379   52158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:03:30.854427   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:30.857107   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.857439   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:30.857470   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.857672   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHPort
	I0205 03:03:30.857860   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.858011   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHUsername
	I0205 03:03:30.858147   52158 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/test-preload-375673/id_rsa Username:docker}
	I0205 03:03:30.935231   52158 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:03:30.939273   52158 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:03:30.939307   52158 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:03:30.939390   52158 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:03:30.939524   52158 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:03:30.939678   52158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:03:30.948457   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:03:30.970318   52158 start.go:296] duration metric: took 116.273194ms for postStartSetup
	I0205 03:03:30.970357   52158 fix.go:56] duration metric: took 20.360057275s for fixHost
	I0205 03:03:30.970381   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:30.972798   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.973136   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:30.973160   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:30.973315   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHPort
	I0205 03:03:30.973515   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.973658   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:30.973800   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHUsername
	I0205 03:03:30.973942   52158 main.go:141] libmachine: Using SSH client type: native
	I0205 03:03:30.974112   52158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0205 03:03:30.974125   52158 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:03:31.070033   52158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738724611.029845650
	
	I0205 03:03:31.070058   52158 fix.go:216] guest clock: 1738724611.029845650
	I0205 03:03:31.070065   52158 fix.go:229] Guest: 2025-02-05 03:03:31.02984565 +0000 UTC Remote: 2025-02-05 03:03:30.970361598 +0000 UTC m=+27.052168528 (delta=59.484052ms)
	I0205 03:03:31.070083   52158 fix.go:200] guest clock delta is within tolerance: 59.484052ms
	I0205 03:03:31.070088   52158 start.go:83] releasing machines lock for "test-preload-375673", held for 20.459808906s
	I0205 03:03:31.070105   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	I0205 03:03:31.070361   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetIP
	I0205 03:03:31.073041   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:31.073403   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:31.073435   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:31.073564   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	I0205 03:03:31.073983   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	I0205 03:03:31.074185   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	I0205 03:03:31.074277   52158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:03:31.074322   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:31.074382   52158 ssh_runner.go:195] Run: cat /version.json
	I0205 03:03:31.074403   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:31.076965   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:31.077157   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:31.077214   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:31.077238   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:31.077408   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHPort
	I0205 03:03:31.077579   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:31.077627   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:31.077652   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:31.077746   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHUsername
	I0205 03:03:31.077845   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHPort
	I0205 03:03:31.077924   52158 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/test-preload-375673/id_rsa Username:docker}
	I0205 03:03:31.077968   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:31.078065   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHUsername
	I0205 03:03:31.078179   52158 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/test-preload-375673/id_rsa Username:docker}
	I0205 03:03:31.150139   52158 ssh_runner.go:195] Run: systemctl --version
	I0205 03:03:31.175585   52158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:03:31.316995   52158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:03:31.322397   52158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:03:31.322457   52158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:03:31.337441   52158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0205 03:03:31.337464   52158 start.go:495] detecting cgroup driver to use...
	I0205 03:03:31.337517   52158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:03:31.354204   52158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:03:31.367187   52158 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:03:31.367253   52158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:03:31.379728   52158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:03:31.392053   52158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:03:31.515469   52158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:03:31.652758   52158 docker.go:233] disabling docker service ...
	I0205 03:03:31.652839   52158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:03:31.666705   52158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:03:31.679205   52158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:03:31.811268   52158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:03:31.937983   52158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:03:31.951605   52158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:03:31.969148   52158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0205 03:03:31.969215   52158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:03:31.978838   52158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:03:31.978907   52158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:03:31.988758   52158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:03:31.998277   52158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:03:32.008133   52158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:03:32.018318   52158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:03:32.027576   52158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:03:32.043176   52158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:03:32.052124   52158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:03:32.060364   52158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0205 03:03:32.060416   52158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0205 03:03:32.072282   52158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 03:03:32.080712   52158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:03:32.189517   52158 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:03:32.274823   52158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:03:32.274884   52158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:03:32.279202   52158 start.go:563] Will wait 60s for crictl version
	I0205 03:03:32.279255   52158 ssh_runner.go:195] Run: which crictl
	I0205 03:03:32.282446   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:03:32.320662   52158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:03:32.320753   52158 ssh_runner.go:195] Run: crio --version
	I0205 03:03:32.347227   52158 ssh_runner.go:195] Run: crio --version
	I0205 03:03:32.377569   52158 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0205 03:03:32.378884   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetIP
	I0205 03:03:32.381471   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:32.381718   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:32.381756   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:32.381936   52158 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0205 03:03:32.386029   52158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:03:32.399488   52158 kubeadm.go:883] updating cluster {Name:test-preload-375673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-375673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:03:32.399641   52158 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0205 03:03:32.399689   52158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:03:32.438302   52158 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0205 03:03:32.438364   52158 ssh_runner.go:195] Run: which lz4
	I0205 03:03:32.442157   52158 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0205 03:03:32.445928   52158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0205 03:03:32.445960   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0205 03:03:33.804554   52158 crio.go:462] duration metric: took 1.362434069s to copy over tarball
	I0205 03:03:33.804622   52158 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0205 03:03:36.161848   52158 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.357198896s)
	I0205 03:03:36.161875   52158 crio.go:469] duration metric: took 2.357295578s to extract the tarball
	I0205 03:03:36.161882   52158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0205 03:03:36.202084   52158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:03:36.243546   52158 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0205 03:03:36.243570   52158 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0205 03:03:36.243649   52158 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:03:36.243674   52158 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0205 03:03:36.243694   52158 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0205 03:03:36.243709   52158 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0205 03:03:36.243727   52158 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0205 03:03:36.243778   52158 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0205 03:03:36.243738   52158 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0205 03:03:36.243663   52158 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0205 03:03:36.245148   52158 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0205 03:03:36.245148   52158 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0205 03:03:36.245147   52158 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:03:36.245153   52158 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0205 03:03:36.245153   52158 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0205 03:03:36.245164   52158 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0205 03:03:36.245164   52158 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0205 03:03:36.245208   52158 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0205 03:03:36.377799   52158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0205 03:03:36.378910   52158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0205 03:03:36.390326   52158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0205 03:03:36.390897   52158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0205 03:03:36.408609   52158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0205 03:03:36.409089   52158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0205 03:03:36.426648   52158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0205 03:03:36.463745   52158 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0205 03:03:36.463801   52158 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0205 03:03:36.463849   52158 ssh_runner.go:195] Run: which crictl
	I0205 03:03:36.474341   52158 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0205 03:03:36.474414   52158 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0205 03:03:36.474472   52158 ssh_runner.go:195] Run: which crictl
	I0205 03:03:36.536936   52158 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0205 03:03:36.536984   52158 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0205 03:03:36.537035   52158 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0205 03:03:36.537048   52158 ssh_runner.go:195] Run: which crictl
	I0205 03:03:36.537070   52158 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0205 03:03:36.537073   52158 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0205 03:03:36.537091   52158 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0205 03:03:36.537109   52158 ssh_runner.go:195] Run: which crictl
	I0205 03:03:36.537123   52158 ssh_runner.go:195] Run: which crictl
	I0205 03:03:36.551059   52158 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0205 03:03:36.551098   52158 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0205 03:03:36.551136   52158 ssh_runner.go:195] Run: which crictl
	I0205 03:03:36.558077   52158 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0205 03:03:36.558128   52158 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0205 03:03:36.558173   52158 ssh_runner.go:195] Run: which crictl
	I0205 03:03:36.558179   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0205 03:03:36.558171   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0205 03:03:36.558227   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0205 03:03:36.558240   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0205 03:03:36.558294   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0205 03:03:36.559993   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0205 03:03:36.671902   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0205 03:03:36.671954   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0205 03:03:36.672013   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0205 03:03:36.672078   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0205 03:03:36.675828   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0205 03:03:36.675924   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0205 03:03:36.676125   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0205 03:03:36.813429   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0205 03:03:36.813460   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0205 03:03:36.813519   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0205 03:03:36.813586   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0205 03:03:36.815005   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0205 03:03:36.825539   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0205 03:03:36.825602   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0205 03:03:36.916952   52158 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0205 03:03:36.917071   52158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0205 03:03:36.953619   52158 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0205 03:03:36.953681   52158 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0205 03:03:36.953717   52158 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0205 03:03:36.953732   52158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0205 03:03:36.953767   52158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0205 03:03:36.953794   52158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0205 03:03:36.958047   52158 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0205 03:03:36.974529   52158 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0205 03:03:36.974620   52158 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0205 03:03:36.974651   52158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0205 03:03:36.974662   52158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0205 03:03:36.974668   52158 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0205 03:03:36.974692   52158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0205 03:03:36.974702   52158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0205 03:03:36.974705   52158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0205 03:03:36.974732   52158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0205 03:03:36.974750   52158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0205 03:03:37.010529   52158 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0205 03:03:37.010626   52158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0205 03:03:37.010639   52158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0205 03:03:37.163024   52158 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:03:39.739326   52158 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.76459737s)
	I0205 03:03:39.739364   52158 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0205 03:03:39.739364   52158 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.728722239s)
	I0205 03:03:39.739336   52158 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.764625287s)
	I0205 03:03:39.739390   52158 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0205 03:03:39.739398   52158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0205 03:03:39.739410   52158 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0205 03:03:39.739439   52158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0205 03:03:39.739468   52158 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.576415887s)
	I0205 03:03:41.789354   52158 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.049870677s)
	I0205 03:03:41.789407   52158 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0205 03:03:41.789443   52158 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0205 03:03:41.789503   52158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0205 03:03:42.238323   52158 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0205 03:03:42.238381   52158 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0205 03:03:42.238449   52158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0205 03:03:42.987216   52158 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0205 03:03:42.987267   52158 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0205 03:03:42.987309   52158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0205 03:03:43.329317   52158 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0205 03:03:43.329368   52158 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0205 03:03:43.329417   52158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0205 03:03:43.970816   52158 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0205 03:03:43.970850   52158 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0205 03:03:43.970899   52158 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0205 03:03:44.812000   52158 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0205 03:03:44.812049   52158 cache_images.go:123] Successfully loaded all cached images
	I0205 03:03:44.812058   52158 cache_images.go:92] duration metric: took 8.568475852s to LoadCachedImages
	I0205 03:03:44.812072   52158 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.24.4 crio true true} ...
	I0205 03:03:44.812199   52158 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-375673 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-375673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0205 03:03:44.812264   52158 ssh_runner.go:195] Run: crio config
	I0205 03:03:44.858972   52158 cni.go:84] Creating CNI manager for ""
	I0205 03:03:44.858998   52158 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:03:44.859006   52158 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:03:44.859030   52158 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-375673 NodeName:test-preload-375673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 03:03:44.859179   52158 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-375673"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 03:03:44.859266   52158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
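The block dumped above (kubeadm.go:195) is the multi-document kubeadm.yaml that minikube stages as /var/tmp/minikube/kubeadm.yaml.new: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration separated by "---". As a rough illustration only (not minikube code), a small Go program using gopkg.in/yaml.v3 can split such a file and report each document's apiVersion/kind, which is a handy sanity check when debugging a run like this:

// Hypothetical helper: list the documents in the generated kubeadm.yaml.
// The path is taken from the log above; everything else is an assumption.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc typeMeta
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// e.g. "kubeadm.k8s.io/v1beta3 / ClusterConfiguration"
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}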
	I0205 03:03:44.869260   52158 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:03:44.869363   52158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:03:44.878481   52158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0205 03:03:44.894581   52158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:03:44.910357   52158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0205 03:03:44.926651   52158 ssh_runner.go:195] Run: grep 192.168.39.150	control-plane.minikube.internal$ /etc/hosts
	I0205 03:03:44.930290   52158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
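The bash one-liner above keeps /etc/hosts idempotent: it drops any existing line ending in a tab plus "control-plane.minikube.internal" and appends the current control-plane IP, staging the result in /tmp before copying it back with sudo. A sketch of the same logic in Go (illustrative only; it writes /etc/hosts directly, so it assumes it is run as root, unlike the sudo-based command in the log):

// Replace the control-plane.minikube.internal entry in /etc/hosts with the
// current IP. IP and hostname are taken from the log; not minikube's code.
package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors grep -v $'\tcontrol-plane.minikube.internal$'
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.39.150\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}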
	I0205 03:03:44.941832   52158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:03:45.051416   52158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:03:45.067041   52158 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673 for IP: 192.168.39.150
	I0205 03:03:45.067067   52158 certs.go:194] generating shared ca certs ...
	I0205 03:03:45.067090   52158 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:03:45.067255   52158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:03:45.067296   52158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:03:45.067307   52158 certs.go:256] generating profile certs ...
	I0205 03:03:45.067377   52158 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/client.key
	I0205 03:03:45.067441   52158 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/apiserver.key.7621ec93
	I0205 03:03:45.067483   52158 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/proxy-client.key
	I0205 03:03:45.067607   52158 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:03:45.067652   52158 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:03:45.067677   52158 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:03:45.067707   52158 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:03:45.067746   52158 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:03:45.067774   52158 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:03:45.067814   52158 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:03:45.068491   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:03:45.101059   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:03:45.132510   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:03:45.163339   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:03:45.196178   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0205 03:03:45.242492   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0205 03:03:45.279752   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:03:45.303012   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0205 03:03:45.326113   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:03:45.348476   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:03:45.371139   52158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:03:45.394034   52158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:03:45.410460   52158 ssh_runner.go:195] Run: openssl version
	I0205 03:03:45.415969   52158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:03:45.426428   52158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:03:45.430664   52158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:03:45.430719   52158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:03:45.436234   52158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 03:03:45.447659   52158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19989.pem && ln -fs /usr/share/ca-certificates/19989.pem /etc/ssl/certs/19989.pem"
	I0205 03:03:45.459002   52158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19989.pem
	I0205 03:03:45.463628   52158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:11 /usr/share/ca-certificates/19989.pem
	I0205 03:03:45.463689   52158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19989.pem
	I0205 03:03:45.469414   52158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19989.pem /etc/ssl/certs/51391683.0"
	I0205 03:03:45.480549   52158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:03:45.492334   52158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:03:45.497024   52158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:03:45.497085   52158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:03:45.502612   52158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
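The `<hash>.0` names in the symlink commands above (b5213941.0, 51391683.0, 3ec20f2e.0) come from `openssl x509 -hash -noout`, which prints the subject-name hash OpenSSL uses to look up CAs in a hashed certificate directory such as /etc/ssl/certs. A minimal sketch of that hash-and-link step, shelling out to the same openssl command the log shows (assumes openssl is on PATH and root permissions for /etc/ssl/certs; not minikube's code):

// Create the /etc/ssl/certs/<hash>.0 symlink for one CA certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pem string) error {
	// Same command as in the log: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0") // e.g. b5213941.0 above
	_ = os.Remove(link)                                // ln -fs equivalent
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}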
	I0205 03:03:45.513028   52158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 03:03:45.517475   52158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0205 03:03:45.523249   52158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0205 03:03:45.528923   52158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0205 03:03:45.534477   52158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0205 03:03:45.539980   52158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0205 03:03:45.545425   52158 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
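Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate will expire within the next 86400 seconds (24 hours), which is how the restart path decides whether control-plane certs need regenerating. A native-Go equivalent of that check, using crypto/x509 (sketch only; the path is one of the certs listed above):

// Report whether a certificate expires within the next 24h, like -checkend 86400.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}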
	I0205 03:03:45.550837   52158 kubeadm.go:392] StartCluster: {Name:test-preload-375673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-375673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:03:45.550952   52158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 03:03:45.551015   52158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:03:45.588218   52158 cri.go:89] found id: ""
	I0205 03:03:45.588312   52158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 03:03:45.598105   52158 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0205 03:03:45.598133   52158 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0205 03:03:45.598171   52158 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0205 03:03:45.607536   52158 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0205 03:03:45.608042   52158 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-375673" does not appear in /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:03:45.608165   52158 kubeconfig.go:62] /home/jenkins/minikube-integration/20363-12788/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-375673" cluster setting kubeconfig missing "test-preload-375673" context setting]
	I0205 03:03:45.608447   52158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:03:45.609000   52158 kapi.go:59] client config for test-preload-375673: &rest.Config{Host:"https://192.168.39.150:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/client.crt", KeyFile:"/home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/client.key", CAFile:"/home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24db320), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0205 03:03:45.609451   52158 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0205 03:03:45.609470   52158 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0205 03:03:45.609475   52158 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0205 03:03:45.609482   52158 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0205 03:03:45.609831   52158 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0205 03:03:45.619283   52158 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.150
	I0205 03:03:45.619320   52158 kubeadm.go:1160] stopping kube-system containers ...
	I0205 03:03:45.619338   52158 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0205 03:03:45.619387   52158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:03:45.655612   52158 cri.go:89] found id: ""
	I0205 03:03:45.655698   52158 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0205 03:03:45.671821   52158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:03:45.681050   52158 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:03:45.681072   52158 kubeadm.go:157] found existing configuration files:
	
	I0205 03:03:45.681111   52158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:03:45.689851   52158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:03:45.689913   52158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:03:45.698997   52158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:03:45.707635   52158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:03:45.707705   52158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:03:45.716482   52158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:03:45.725299   52158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:03:45.725366   52158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:03:45.734884   52158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:03:45.743589   52158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:03:45.743647   52158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 03:03:45.752865   52158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 03:03:45.761845   52158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0205 03:03:45.856706   52158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0205 03:03:46.623069   52158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0205 03:03:46.903114   52158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0205 03:03:46.959191   52158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
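The restart path above re-runs a fixed sequence of `kubeadm init phase` subcommands against the staged config: certs, kubeconfig, kubelet-start, control-plane, and etcd; the `addon all` phase appears further down in the log once the apiserver is healthy. A runner for that same sequence, written as an illustrative Go sketch (binary and config paths are copied from the log; this is not minikube's own code):

// Run the logged `kubeadm init phase ...` sequence in order, stopping on the
// first failure.
package main

import (
	"os"
	"os/exec"
)

func main() {
	const kubeadm = "/var/lib/minikube/binaries/v1.24.4/kubeadm"
	const config = "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...), "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err) // stop on the first failed phase
		}
	}
}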
	I0205 03:03:47.009774   52158 api_server.go:52] waiting for apiserver process to appear ...
	I0205 03:03:47.009850   52158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:03:47.510796   52158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:03:48.010171   52158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:03:48.028049   52158 api_server.go:72] duration metric: took 1.01827265s to wait for apiserver process to appear ...
	I0205 03:03:48.028083   52158 api_server.go:88] waiting for apiserver healthz status ...
	I0205 03:03:48.028108   52158 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0205 03:03:48.028588   52158 api_server.go:269] stopped: https://192.168.39.150:8443/healthz: Get "https://192.168.39.150:8443/healthz": dial tcp 192.168.39.150:8443: connect: connection refused
	I0205 03:03:48.528241   52158 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0205 03:03:51.788281   52158 api_server.go:279] https://192.168.39.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0205 03:03:51.788316   52158 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0205 03:03:51.788335   52158 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0205 03:03:51.814420   52158 api_server.go:279] https://192.168.39.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0205 03:03:51.814454   52158 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0205 03:03:52.028846   52158 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0205 03:03:52.034539   52158 api_server.go:279] https://192.168.39.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0205 03:03:52.034565   52158 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0205 03:03:52.528199   52158 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0205 03:03:52.533600   52158 api_server.go:279] https://192.168.39.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0205 03:03:52.533628   52158 api_server.go:103] status: https://192.168.39.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0205 03:03:53.028265   52158 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0205 03:03:53.033412   52158 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0205 03:03:53.043526   52158 api_server.go:141] control plane version: v1.24.4
	I0205 03:03:53.043558   52158 api_server.go:131] duration metric: took 5.015467146s to wait for apiserver health ...
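The probe loop above is plain HTTPS polling of /healthz: it sees 403 while the anonymous request is rejected (the RBAC bootstrap has not granted unauthenticated access yet), 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still reported as failed, and finally 200. A minimal sketch of that kind of loop (not minikube's implementation; TLS verification is skipped here only to keep the example short):

// Poll the apiserver's /healthz until it returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "returned 200: ok" in the log
			}
			// 403 (anonymous forbidden) and 500 (post-start hooks pending) are retried.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.150:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}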
	I0205 03:03:53.043569   52158 cni.go:84] Creating CNI manager for ""
	I0205 03:03:53.043578   52158 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:03:53.044893   52158 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0205 03:03:53.046103   52158 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0205 03:03:53.067060   52158 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0205 03:03:53.106175   52158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 03:03:53.112246   52158 system_pods.go:59] 7 kube-system pods found
	I0205 03:03:53.112308   52158 system_pods.go:61] "coredns-6d4b75cb6d-fbjgg" [008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0205 03:03:53.112323   52158 system_pods.go:61] "etcd-test-preload-375673" [9adc6b00-fdf8-4804-b601-bc142eb6b9a8] Running
	I0205 03:03:53.112334   52158 system_pods.go:61] "kube-apiserver-test-preload-375673" [3f878349-abfb-4ce9-9161-e9ff8ddb2fab] Running
	I0205 03:03:53.112344   52158 system_pods.go:61] "kube-controller-manager-test-preload-375673" [5c6dd5d8-0564-4a5d-921e-d154d68f71fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0205 03:03:53.112355   52158 system_pods.go:61] "kube-proxy-lm8m5" [6281cff6-09fd-4137-bc0c-62b443c5ca40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0205 03:03:53.112365   52158 system_pods.go:61] "kube-scheduler-test-preload-375673" [a7c400eb-4935-451b-8cdb-8a8e12796c73] Running
	I0205 03:03:53.112373   52158 system_pods.go:61] "storage-provisioner" [4274ff6e-b8f0-4d7b-b97f-000aabec7f1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0205 03:03:53.112383   52158 system_pods.go:74] duration metric: took 6.182089ms to wait for pod list to return data ...
	I0205 03:03:53.112392   52158 node_conditions.go:102] verifying NodePressure condition ...
	I0205 03:03:53.114997   52158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0205 03:03:53.115025   52158 node_conditions.go:123] node cpu capacity is 2
	I0205 03:03:53.115038   52158 node_conditions.go:105] duration metric: took 2.637738ms to run NodePressure ...
	I0205 03:03:53.115057   52158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0205 03:03:53.393777   52158 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0205 03:03:53.396900   52158 kubeadm.go:739] kubelet initialised
	I0205 03:03:53.396922   52158 kubeadm.go:740] duration metric: took 3.115936ms waiting for restarted kubelet to initialise ...
	I0205 03:03:53.396930   52158 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:03:53.399846   52158 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-fbjgg" in "kube-system" namespace to be "Ready" ...
	I0205 03:03:53.403686   52158 pod_ready.go:98] node "test-preload-375673" hosting pod "coredns-6d4b75cb6d-fbjgg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:53.403714   52158 pod_ready.go:82] duration metric: took 3.838381ms for pod "coredns-6d4b75cb6d-fbjgg" in "kube-system" namespace to be "Ready" ...
	E0205 03:03:53.403726   52158 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-375673" hosting pod "coredns-6d4b75cb6d-fbjgg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:53.403736   52158 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:03:53.408908   52158 pod_ready.go:98] node "test-preload-375673" hosting pod "etcd-test-preload-375673" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:53.408930   52158 pod_ready.go:82] duration metric: took 5.179229ms for pod "etcd-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	E0205 03:03:53.408941   52158 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-375673" hosting pod "etcd-test-preload-375673" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:53.408953   52158 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:03:53.419356   52158 pod_ready.go:98] node "test-preload-375673" hosting pod "kube-apiserver-test-preload-375673" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:53.419389   52158 pod_ready.go:82] duration metric: took 10.423888ms for pod "kube-apiserver-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	E0205 03:03:53.419400   52158 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-375673" hosting pod "kube-apiserver-test-preload-375673" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:53.419408   52158 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:03:53.510224   52158 pod_ready.go:98] node "test-preload-375673" hosting pod "kube-controller-manager-test-preload-375673" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:53.510259   52158 pod_ready.go:82] duration metric: took 90.837724ms for pod "kube-controller-manager-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	E0205 03:03:53.510273   52158 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-375673" hosting pod "kube-controller-manager-test-preload-375673" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:53.510282   52158 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lm8m5" in "kube-system" namespace to be "Ready" ...
	I0205 03:03:53.910426   52158 pod_ready.go:98] node "test-preload-375673" hosting pod "kube-proxy-lm8m5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:53.910455   52158 pod_ready.go:82] duration metric: took 400.160951ms for pod "kube-proxy-lm8m5" in "kube-system" namespace to be "Ready" ...
	E0205 03:03:53.910470   52158 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-375673" hosting pod "kube-proxy-lm8m5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:53.910476   52158 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:03:54.309269   52158 pod_ready.go:98] node "test-preload-375673" hosting pod "kube-scheduler-test-preload-375673" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:54.309303   52158 pod_ready.go:82] duration metric: took 398.819262ms for pod "kube-scheduler-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	E0205 03:03:54.309316   52158 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-375673" hosting pod "kube-scheduler-test-preload-375673" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:54.309353   52158 pod_ready.go:39] duration metric: took 912.387834ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
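The pod_ready.go waits above skip pods whose node still reports "Ready":"False" and otherwise watch for each pod's PodReady condition to turn True. A client-go sketch of that condition check (illustrative only, not minikube's implementation; the kubeconfig path and pod name are taken from the log, and the 4m budget matches the wait mentioned above):

// Poll a kube-system pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20363-12788/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6d4b75cb6d-fbjgg", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("pod never became Ready")
}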
	I0205 03:03:54.309394   52158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 03:03:54.320243   52158 ops.go:34] apiserver oom_adj: -16
	I0205 03:03:54.320257   52158 kubeadm.go:597] duration metric: took 8.722118923s to restartPrimaryControlPlane
	I0205 03:03:54.320264   52158 kubeadm.go:394] duration metric: took 8.769435937s to StartCluster
	I0205 03:03:54.320279   52158 settings.go:142] acquiring lock: {Name:mk2eca847da5ba78f5b041a83e5cfcbdebb0c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:03:54.320349   52158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:03:54.320934   52158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:03:54.321160   52158 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:03:54.321254   52158 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0205 03:03:54.321359   52158 addons.go:69] Setting storage-provisioner=true in profile "test-preload-375673"
	I0205 03:03:54.321381   52158 addons.go:69] Setting default-storageclass=true in profile "test-preload-375673"
	I0205 03:03:54.321393   52158 addons.go:238] Setting addon storage-provisioner=true in "test-preload-375673"
	W0205 03:03:54.321405   52158 addons.go:247] addon storage-provisioner should already be in state true
	I0205 03:03:54.321406   52158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-375673"
	I0205 03:03:54.321439   52158 host.go:66] Checking if "test-preload-375673" exists ...
	I0205 03:03:54.321483   52158 config.go:182] Loaded profile config "test-preload-375673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0205 03:03:54.321809   52158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:03:54.321806   52158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:03:54.321862   52158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:03:54.321887   52158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:03:54.322636   52158 out.go:177] * Verifying Kubernetes components...
	I0205 03:03:54.323745   52158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:03:54.336366   52158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44429
	I0205 03:03:54.336648   52158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41763
	I0205 03:03:54.336845   52158 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:03:54.337005   52158 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:03:54.337372   52158 main.go:141] libmachine: Using API Version  1
	I0205 03:03:54.337391   52158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:03:54.337503   52158 main.go:141] libmachine: Using API Version  1
	I0205 03:03:54.337528   52158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:03:54.337744   52158 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:03:54.337952   52158 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:03:54.337952   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetState
	I0205 03:03:54.338442   52158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:03:54.338502   52158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:03:54.339956   52158 kapi.go:59] client config for test-preload-375673: &rest.Config{Host:"https://192.168.39.150:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/client.crt", KeyFile:"/home/jenkins/minikube-integration/20363-12788/.minikube/profiles/test-preload-375673/client.key", CAFile:"/home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24db320), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0205 03:03:54.340281   52158 addons.go:238] Setting addon default-storageclass=true in "test-preload-375673"
	W0205 03:03:54.340305   52158 addons.go:247] addon default-storageclass should already be in state true
	I0205 03:03:54.340336   52158 host.go:66] Checking if "test-preload-375673" exists ...
	I0205 03:03:54.340597   52158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:03:54.340638   52158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:03:54.352802   52158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43861
	I0205 03:03:54.353251   52158 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:03:54.353794   52158 main.go:141] libmachine: Using API Version  1
	I0205 03:03:54.353817   52158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:03:54.354003   52158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I0205 03:03:54.354122   52158 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:03:54.354274   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetState
	I0205 03:03:54.354390   52158 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:03:54.354921   52158 main.go:141] libmachine: Using API Version  1
	I0205 03:03:54.354940   52158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:03:54.355262   52158 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:03:54.355820   52158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:03:54.355855   52158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:03:54.355949   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	I0205 03:03:54.357570   52158 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:03:54.358611   52158 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:03:54.358625   52158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0205 03:03:54.358638   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:54.360987   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:54.361319   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:54.361364   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:54.361520   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHPort
	I0205 03:03:54.361667   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:54.361777   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHUsername
	I0205 03:03:54.361903   52158 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/test-preload-375673/id_rsa Username:docker}
	I0205 03:03:54.387325   52158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0205 03:03:54.387705   52158 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:03:54.388065   52158 main.go:141] libmachine: Using API Version  1
	I0205 03:03:54.388086   52158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:03:54.388406   52158 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:03:54.388566   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetState
	I0205 03:03:54.389897   52158 main.go:141] libmachine: (test-preload-375673) Calling .DriverName
	I0205 03:03:54.390108   52158 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0205 03:03:54.390124   52158 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0205 03:03:54.390140   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHHostname
	I0205 03:03:54.392752   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:54.393131   52158 main.go:141] libmachine: (test-preload-375673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:6e:cf", ip: ""} in network mk-test-preload-375673: {Iface:virbr1 ExpiryTime:2025-02-05 03:59:34 +0000 UTC Type:0 Mac:52:54:00:51:6e:cf Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:test-preload-375673 Clientid:01:52:54:00:51:6e:cf}
	I0205 03:03:54.393156   52158 main.go:141] libmachine: (test-preload-375673) DBG | domain test-preload-375673 has defined IP address 192.168.39.150 and MAC address 52:54:00:51:6e:cf in network mk-test-preload-375673
	I0205 03:03:54.393328   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHPort
	I0205 03:03:54.393506   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHKeyPath
	I0205 03:03:54.393642   52158 main.go:141] libmachine: (test-preload-375673) Calling .GetSSHUsername
	I0205 03:03:54.393761   52158 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/test-preload-375673/id_rsa Username:docker}
	I0205 03:03:54.512102   52158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:03:54.527083   52158 node_ready.go:35] waiting up to 6m0s for node "test-preload-375673" to be "Ready" ...
	I0205 03:03:54.637779   52158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:03:54.663554   52158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
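The two Run lines above apply the addon manifests with the bundled kubectl and the in-VM kubeconfig. A minimal Go wrapper doing the same via os/exec (paths taken from the log; illustrative only, not minikube's code):

// Apply the addon manifests with the bundled kubectl, as in the log above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	}
	for _, m := range manifests {
		cmd := exec.Command("/var/lib/minikube/binaries/v1.24.4/kubectl", "apply", "-f", m)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}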
	I0205 03:03:55.492137   52158 main.go:141] libmachine: Making call to close driver server
	I0205 03:03:55.492164   52158 main.go:141] libmachine: (test-preload-375673) Calling .Close
	I0205 03:03:55.492259   52158 main.go:141] libmachine: Making call to close driver server
	I0205 03:03:55.492269   52158 main.go:141] libmachine: (test-preload-375673) Calling .Close
	I0205 03:03:55.492471   52158 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:03:55.492484   52158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:03:55.492499   52158 main.go:141] libmachine: (test-preload-375673) DBG | Closing plugin on server side
	I0205 03:03:55.492500   52158 main.go:141] libmachine: Making call to close driver server
	I0205 03:03:55.492503   52158 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:03:55.492515   52158 main.go:141] libmachine: (test-preload-375673) Calling .Close
	I0205 03:03:55.492543   52158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:03:55.492544   52158 main.go:141] libmachine: (test-preload-375673) DBG | Closing plugin on server side
	I0205 03:03:55.492553   52158 main.go:141] libmachine: Making call to close driver server
	I0205 03:03:55.492568   52158 main.go:141] libmachine: (test-preload-375673) Calling .Close
	I0205 03:03:55.492722   52158 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:03:55.492738   52158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:03:55.492765   52158 main.go:141] libmachine: (test-preload-375673) DBG | Closing plugin on server side
	I0205 03:03:55.492846   52158 main.go:141] libmachine: (test-preload-375673) DBG | Closing plugin on server side
	I0205 03:03:55.492874   52158 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:03:55.492885   52158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:03:55.499462   52158 main.go:141] libmachine: Making call to close driver server
	I0205 03:03:55.499484   52158 main.go:141] libmachine: (test-preload-375673) Calling .Close
	I0205 03:03:55.499696   52158 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:03:55.499710   52158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:03:55.499727   52158 main.go:141] libmachine: (test-preload-375673) DBG | Closing plugin on server side
	I0205 03:03:55.501526   52158 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0205 03:03:55.502847   52158 addons.go:514] duration metric: took 1.181600034s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0205 03:03:56.531360   52158 node_ready.go:53] node "test-preload-375673" has status "Ready":"False"
	I0205 03:03:59.030517   52158 node_ready.go:53] node "test-preload-375673" has status "Ready":"False"
	I0205 03:04:01.030726   52158 node_ready.go:53] node "test-preload-375673" has status "Ready":"False"
	I0205 03:04:02.031761   52158 node_ready.go:49] node "test-preload-375673" has status "Ready":"True"
	I0205 03:04:02.031794   52158 node_ready.go:38] duration metric: took 7.504679413s for node "test-preload-375673" to be "Ready" ...
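The node_ready.go wait above is the node-level counterpart of the earlier pod sketch: poll the node object until its NodeReady condition reports True. A compact client-go version (illustrative only; kubeconfig path, node name, and the 6m budget come from the log):

// Poll the node until its NodeReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20363-12788/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-375673", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println(`node has status "Ready":"True"`)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	panic("node never became Ready")
}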
	I0205 03:04:02.031805   52158 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:04:02.037031   52158 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-fbjgg" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:02.041454   52158 pod_ready.go:93] pod "coredns-6d4b75cb6d-fbjgg" in "kube-system" namespace has status "Ready":"True"
	I0205 03:04:02.041482   52158 pod_ready.go:82] duration metric: took 4.423284ms for pod "coredns-6d4b75cb6d-fbjgg" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:02.041495   52158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:04.053558   52158 pod_ready.go:103] pod "etcd-test-preload-375673" in "kube-system" namespace has status "Ready":"False"
	I0205 03:04:04.547743   52158 pod_ready.go:93] pod "etcd-test-preload-375673" in "kube-system" namespace has status "Ready":"True"
	I0205 03:04:04.547775   52158 pod_ready.go:82] duration metric: took 2.506270133s for pod "etcd-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:04.547789   52158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:04.551747   52158 pod_ready.go:93] pod "kube-apiserver-test-preload-375673" in "kube-system" namespace has status "Ready":"True"
	I0205 03:04:04.551766   52158 pod_ready.go:82] duration metric: took 3.968781ms for pod "kube-apiserver-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:04.551775   52158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:05.557760   52158 pod_ready.go:93] pod "kube-controller-manager-test-preload-375673" in "kube-system" namespace has status "Ready":"True"
	I0205 03:04:05.557791   52158 pod_ready.go:82] duration metric: took 1.006008025s for pod "kube-controller-manager-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:05.557815   52158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lm8m5" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:05.563021   52158 pod_ready.go:93] pod "kube-proxy-lm8m5" in "kube-system" namespace has status "Ready":"True"
	I0205 03:04:05.563046   52158 pod_ready.go:82] duration metric: took 5.223338ms for pod "kube-proxy-lm8m5" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:05.563057   52158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:05.631186   52158 pod_ready.go:93] pod "kube-scheduler-test-preload-375673" in "kube-system" namespace has status "Ready":"True"
	I0205 03:04:05.631225   52158 pod_ready.go:82] duration metric: took 68.161159ms for pod "kube-scheduler-test-preload-375673" in "kube-system" namespace to be "Ready" ...
	I0205 03:04:05.631240   52158 pod_ready.go:39] duration metric: took 3.599421445s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:04:05.631258   52158 api_server.go:52] waiting for apiserver process to appear ...
	I0205 03:04:05.631339   52158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:04:05.647155   52158 api_server.go:72] duration metric: took 11.325964684s to wait for apiserver process to appear ...
	I0205 03:04:05.647191   52158 api_server.go:88] waiting for apiserver healthz status ...
	I0205 03:04:05.647217   52158 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0205 03:04:05.652712   52158 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0205 03:04:05.653739   52158 api_server.go:141] control plane version: v1.24.4
	I0205 03:04:05.653760   52158 api_server.go:131] duration metric: took 6.562867ms to wait for apiserver health ...
	I0205 03:04:05.653767   52158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 03:04:05.831775   52158 system_pods.go:59] 7 kube-system pods found
	I0205 03:04:05.831802   52158 system_pods.go:61] "coredns-6d4b75cb6d-fbjgg" [008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a] Running
	I0205 03:04:05.831807   52158 system_pods.go:61] "etcd-test-preload-375673" [9adc6b00-fdf8-4804-b601-bc142eb6b9a8] Running
	I0205 03:04:05.831811   52158 system_pods.go:61] "kube-apiserver-test-preload-375673" [3f878349-abfb-4ce9-9161-e9ff8ddb2fab] Running
	I0205 03:04:05.831815   52158 system_pods.go:61] "kube-controller-manager-test-preload-375673" [5c6dd5d8-0564-4a5d-921e-d154d68f71fb] Running
	I0205 03:04:05.831818   52158 system_pods.go:61] "kube-proxy-lm8m5" [6281cff6-09fd-4137-bc0c-62b443c5ca40] Running
	I0205 03:04:05.831821   52158 system_pods.go:61] "kube-scheduler-test-preload-375673" [a7c400eb-4935-451b-8cdb-8a8e12796c73] Running
	I0205 03:04:05.831824   52158 system_pods.go:61] "storage-provisioner" [4274ff6e-b8f0-4d7b-b97f-000aabec7f1b] Running
	I0205 03:04:05.831829   52158 system_pods.go:74] duration metric: took 178.057433ms to wait for pod list to return data ...
	I0205 03:04:05.831836   52158 default_sa.go:34] waiting for default service account to be created ...
	I0205 03:04:06.030911   52158 default_sa.go:45] found service account: "default"
	I0205 03:04:06.030938   52158 default_sa.go:55] duration metric: took 199.097473ms for default service account to be created ...
	I0205 03:04:06.030947   52158 system_pods.go:116] waiting for k8s-apps to be running ...
	I0205 03:04:06.233254   52158 system_pods.go:86] 7 kube-system pods found
	I0205 03:04:06.233282   52158 system_pods.go:89] "coredns-6d4b75cb6d-fbjgg" [008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a] Running
	I0205 03:04:06.233287   52158 system_pods.go:89] "etcd-test-preload-375673" [9adc6b00-fdf8-4804-b601-bc142eb6b9a8] Running
	I0205 03:04:06.233291   52158 system_pods.go:89] "kube-apiserver-test-preload-375673" [3f878349-abfb-4ce9-9161-e9ff8ddb2fab] Running
	I0205 03:04:06.233295   52158 system_pods.go:89] "kube-controller-manager-test-preload-375673" [5c6dd5d8-0564-4a5d-921e-d154d68f71fb] Running
	I0205 03:04:06.233298   52158 system_pods.go:89] "kube-proxy-lm8m5" [6281cff6-09fd-4137-bc0c-62b443c5ca40] Running
	I0205 03:04:06.233302   52158 system_pods.go:89] "kube-scheduler-test-preload-375673" [a7c400eb-4935-451b-8cdb-8a8e12796c73] Running
	I0205 03:04:06.233304   52158 system_pods.go:89] "storage-provisioner" [4274ff6e-b8f0-4d7b-b97f-000aabec7f1b] Running
	I0205 03:04:06.233311   52158 system_pods.go:126] duration metric: took 202.358011ms to wait for k8s-apps to be running ...
	I0205 03:04:06.233321   52158 system_svc.go:44] waiting for kubelet service to be running ....
	I0205 03:04:06.233399   52158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:04:06.248435   52158 system_svc.go:56] duration metric: took 15.105236ms WaitForService to wait for kubelet
	I0205 03:04:06.248477   52158 kubeadm.go:582] duration metric: took 11.927288945s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:04:06.248517   52158 node_conditions.go:102] verifying NodePressure condition ...
	I0205 03:04:06.430568   52158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0205 03:04:06.430606   52158 node_conditions.go:123] node cpu capacity is 2
	I0205 03:04:06.430629   52158 node_conditions.go:105] duration metric: took 182.106166ms to run NodePressure ...
	I0205 03:04:06.430643   52158 start.go:241] waiting for startup goroutines ...
	I0205 03:04:06.430652   52158 start.go:246] waiting for cluster config update ...
	I0205 03:04:06.430671   52158 start.go:255] writing updated cluster config ...
	I0205 03:04:06.430993   52158 ssh_runner.go:195] Run: rm -f paused
	I0205 03:04:06.478668   52158 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0205 03:04:06.480641   52158 out.go:201] 
	W0205 03:04:06.482001   52158 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0205 03:04:06.483378   52158 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0205 03:04:06.484701   52158 out.go:177] * Done! kubectl is now configured to use "test-preload-375673" cluster and "default" namespace by default
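	The tail of the start log above is minikube's readiness sweep: per-pod "Ready" waits, a pgrep for the kube-apiserver process, a /healthz probe against https://192.168.39.150:8443, and a kubelet service check. The version-skew warning is expected, since kubectl officially supports only one minor version of skew against the API server and the host client (v1.32.1) is eight minors ahead of the v1.24.4 control plane; the suggested 'minikube kubectl --' form runs a matching client instead. The same checks can be replayed by hand; a minimal sketch, assuming the profile is still running and that anonymous access to /healthz is enabled (the Kubernetes default):
	
	  kubectl --context test-preload-375673 -n kube-system get pods -o wide
	  curl -k https://192.168.39.150:8443/healthz      # should print "ok", matching the 200 logged above
	  minikube -p test-preload-375673 ssh              # then, inside the guest:
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  sudo systemctl is-active kubelet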
	
	
	==> CRI-O <==
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.326937947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738724647326875835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f39dd14-1aa6-498e-b99c-f93237e9a108 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.327712347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50c118b9-065a-4a9f-b48d-b8ae75d2b603 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.327762719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50c118b9-065a-4a9f-b48d-b8ae75d2b603 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.327991051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90e56a546e709f10282b8831a4901a39bc4ad3b988aa86a9a4ac85597580f84c,PodSandboxId:c338c236cbdca84178a89b5c76cdb8b326b39c3bfefbf15f5f93102a8ed5b2b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1738724640113845529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fbjgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cd15e4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0610cd7db68b0842be124bae1daeb905ac68760e7a585992aa53f66529b8b4c4,PodSandboxId:afdd556be0bb05f6d6be1d4a918731f95d3acba560207a87ec3a2e4a98ef49c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738724633033181261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4274ff6e-b8f0-4d7b-b97f-000aabec7f1b,},Annotations:map[string]string{io.kubernetes.container.hash: d0c71ba7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0217b208c98e41038efb1b006cf10971748370d90e6c573dd9d822805d8b363c,PodSandboxId:e917a910383d3d3274d61f0be3f0027ae450a9dc76f2f20d98ee2634a7e1233b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1738724632693525896,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lm8m5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
81cff6-09fd-4137-bc0c-62b443c5ca40,},Annotations:map[string]string{io.kubernetes.container.hash: 2794469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2488994361b162ec97fef36762327c597bc8e572b2b7ed30ebf41431df445f9b,PodSandboxId:5b5808f1a2fe3188e500907efded2735a54b341c9f9bcc7ba55d83dd601f2dce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1738724627728509963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df2376686
a285ed3bf517d66240ff61,},Annotations:map[string]string{io.kubernetes.container.hash: 8dbd81f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad10867c39a606614a5e35abb6bab6ef5ea1a9e0d84e6623adb0f16c324e6da3,PodSandboxId:5f48a3c8cce17189a788852af1d2739978f91c3a8e284416c704fb87a4a5e5c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1738724627733621453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ebad6401eecc43ee68db9539f7a3dc7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8c47c7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee11406777f17303d44696d43ab2b8d7fb23ac54e59d2f0cd8b768285b0d368,PodSandboxId:c277464cd4fdfaa9863480f60d42a2a60d89d39e61c0e83c03f35317335f80b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1738724627708826260,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6423e7ee4e0f06b774b08fded572df99,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b04ce56b0103530c2f2dc018e67f253f79ab6b26aaa5b2de26f3ca7d6142e,PodSandboxId:dcb48e3a8c8bdb6c2f239437e9d091a3869c96f5fdd45469b1cd8d0c2ce67411,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1738724627669949859,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26bd007cc9dcaac924eae2f7839cf643,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50c118b9-065a-4a9f-b48d-b8ae75d2b603 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.365320122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b5b0bec-c0a7-437e-ab96-299354df0290 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.365389509Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b5b0bec-c0a7-437e-ab96-299354df0290 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.366226660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db264115-01ee-4a68-a8a4-e9d7cc540543 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.366634513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738724647366613156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db264115-01ee-4a68-a8a4-e9d7cc540543 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.367128847Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b790546-c9c8-48a0-ad19-a50059288047 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.367174163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b790546-c9c8-48a0-ad19-a50059288047 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.367335769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90e56a546e709f10282b8831a4901a39bc4ad3b988aa86a9a4ac85597580f84c,PodSandboxId:c338c236cbdca84178a89b5c76cdb8b326b39c3bfefbf15f5f93102a8ed5b2b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1738724640113845529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fbjgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cd15e4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0610cd7db68b0842be124bae1daeb905ac68760e7a585992aa53f66529b8b4c4,PodSandboxId:afdd556be0bb05f6d6be1d4a918731f95d3acba560207a87ec3a2e4a98ef49c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738724633033181261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4274ff6e-b8f0-4d7b-b97f-000aabec7f1b,},Annotations:map[string]string{io.kubernetes.container.hash: d0c71ba7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0217b208c98e41038efb1b006cf10971748370d90e6c573dd9d822805d8b363c,PodSandboxId:e917a910383d3d3274d61f0be3f0027ae450a9dc76f2f20d98ee2634a7e1233b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1738724632693525896,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lm8m5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
81cff6-09fd-4137-bc0c-62b443c5ca40,},Annotations:map[string]string{io.kubernetes.container.hash: 2794469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2488994361b162ec97fef36762327c597bc8e572b2b7ed30ebf41431df445f9b,PodSandboxId:5b5808f1a2fe3188e500907efded2735a54b341c9f9bcc7ba55d83dd601f2dce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1738724627728509963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df2376686
a285ed3bf517d66240ff61,},Annotations:map[string]string{io.kubernetes.container.hash: 8dbd81f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad10867c39a606614a5e35abb6bab6ef5ea1a9e0d84e6623adb0f16c324e6da3,PodSandboxId:5f48a3c8cce17189a788852af1d2739978f91c3a8e284416c704fb87a4a5e5c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1738724627733621453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ebad6401eecc43ee68db9539f7a3dc7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8c47c7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee11406777f17303d44696d43ab2b8d7fb23ac54e59d2f0cd8b768285b0d368,PodSandboxId:c277464cd4fdfaa9863480f60d42a2a60d89d39e61c0e83c03f35317335f80b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1738724627708826260,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6423e7ee4e0f06b774b08fded572df99,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b04ce56b0103530c2f2dc018e67f253f79ab6b26aaa5b2de26f3ca7d6142e,PodSandboxId:dcb48e3a8c8bdb6c2f239437e9d091a3869c96f5fdd45469b1cd8d0c2ce67411,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1738724627669949859,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26bd007cc9dcaac924eae2f7839cf643,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b790546-c9c8-48a0-ad19-a50059288047 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.402657215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d116ddf-0aba-4fee-afae-2a54acf5f7e0 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.402726477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d116ddf-0aba-4fee-afae-2a54acf5f7e0 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.404241756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a90deac-f29c-4514-931a-48352b44c362 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.404649433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738724647404627923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a90deac-f29c-4514-931a-48352b44c362 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.405220987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43fe1082-867d-44a2-9808-db00c321b8b7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.405298113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43fe1082-867d-44a2-9808-db00c321b8b7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.405459216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90e56a546e709f10282b8831a4901a39bc4ad3b988aa86a9a4ac85597580f84c,PodSandboxId:c338c236cbdca84178a89b5c76cdb8b326b39c3bfefbf15f5f93102a8ed5b2b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1738724640113845529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fbjgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cd15e4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0610cd7db68b0842be124bae1daeb905ac68760e7a585992aa53f66529b8b4c4,PodSandboxId:afdd556be0bb05f6d6be1d4a918731f95d3acba560207a87ec3a2e4a98ef49c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738724633033181261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4274ff6e-b8f0-4d7b-b97f-000aabec7f1b,},Annotations:map[string]string{io.kubernetes.container.hash: d0c71ba7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0217b208c98e41038efb1b006cf10971748370d90e6c573dd9d822805d8b363c,PodSandboxId:e917a910383d3d3274d61f0be3f0027ae450a9dc76f2f20d98ee2634a7e1233b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1738724632693525896,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lm8m5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
81cff6-09fd-4137-bc0c-62b443c5ca40,},Annotations:map[string]string{io.kubernetes.container.hash: 2794469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2488994361b162ec97fef36762327c597bc8e572b2b7ed30ebf41431df445f9b,PodSandboxId:5b5808f1a2fe3188e500907efded2735a54b341c9f9bcc7ba55d83dd601f2dce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1738724627728509963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df2376686
a285ed3bf517d66240ff61,},Annotations:map[string]string{io.kubernetes.container.hash: 8dbd81f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad10867c39a606614a5e35abb6bab6ef5ea1a9e0d84e6623adb0f16c324e6da3,PodSandboxId:5f48a3c8cce17189a788852af1d2739978f91c3a8e284416c704fb87a4a5e5c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1738724627733621453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ebad6401eecc43ee68db9539f7a3dc7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8c47c7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee11406777f17303d44696d43ab2b8d7fb23ac54e59d2f0cd8b768285b0d368,PodSandboxId:c277464cd4fdfaa9863480f60d42a2a60d89d39e61c0e83c03f35317335f80b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1738724627708826260,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6423e7ee4e0f06b774b08fded572df99,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b04ce56b0103530c2f2dc018e67f253f79ab6b26aaa5b2de26f3ca7d6142e,PodSandboxId:dcb48e3a8c8bdb6c2f239437e9d091a3869c96f5fdd45469b1cd8d0c2ce67411,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1738724627669949859,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26bd007cc9dcaac924eae2f7839cf643,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43fe1082-867d-44a2-9808-db00c321b8b7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.437024994Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1dc3b3e0-b1d0-471b-ae81-392645feb540 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.437135828Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1dc3b3e0-b1d0-471b-ae81-392645feb540 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.437976073Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8022cd5-1aca-4e18-affa-61c8a8fdf6f9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.438406367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738724647438383894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8022cd5-1aca-4e18-affa-61c8a8fdf6f9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.439025271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c62ad4b-c5df-44fa-8105-b58b2b360a59 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.439077319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c62ad4b-c5df-44fa-8105-b58b2b360a59 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:04:07 test-preload-375673 crio[677]: time="2025-02-05 03:04:07.439225337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90e56a546e709f10282b8831a4901a39bc4ad3b988aa86a9a4ac85597580f84c,PodSandboxId:c338c236cbdca84178a89b5c76cdb8b326b39c3bfefbf15f5f93102a8ed5b2b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1738724640113845529,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fbjgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cd15e4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0610cd7db68b0842be124bae1daeb905ac68760e7a585992aa53f66529b8b4c4,PodSandboxId:afdd556be0bb05f6d6be1d4a918731f95d3acba560207a87ec3a2e4a98ef49c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1738724633033181261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 4274ff6e-b8f0-4d7b-b97f-000aabec7f1b,},Annotations:map[string]string{io.kubernetes.container.hash: d0c71ba7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0217b208c98e41038efb1b006cf10971748370d90e6c573dd9d822805d8b363c,PodSandboxId:e917a910383d3d3274d61f0be3f0027ae450a9dc76f2f20d98ee2634a7e1233b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1738724632693525896,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lm8m5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
81cff6-09fd-4137-bc0c-62b443c5ca40,},Annotations:map[string]string{io.kubernetes.container.hash: 2794469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2488994361b162ec97fef36762327c597bc8e572b2b7ed30ebf41431df445f9b,PodSandboxId:5b5808f1a2fe3188e500907efded2735a54b341c9f9bcc7ba55d83dd601f2dce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1738724627728509963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df2376686
a285ed3bf517d66240ff61,},Annotations:map[string]string{io.kubernetes.container.hash: 8dbd81f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad10867c39a606614a5e35abb6bab6ef5ea1a9e0d84e6623adb0f16c324e6da3,PodSandboxId:5f48a3c8cce17189a788852af1d2739978f91c3a8e284416c704fb87a4a5e5c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1738724627733621453,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ebad6401eecc43ee68db9539f7a3dc7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8c47c7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee11406777f17303d44696d43ab2b8d7fb23ac54e59d2f0cd8b768285b0d368,PodSandboxId:c277464cd4fdfaa9863480f60d42a2a60d89d39e61c0e83c03f35317335f80b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1738724627708826260,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6423e7ee4e0f06b774b08fded572df99,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b04ce56b0103530c2f2dc018e67f253f79ab6b26aaa5b2de26f3ca7d6142e,PodSandboxId:dcb48e3a8c8bdb6c2f239437e9d091a3869c96f5fdd45469b1cd8d0c2ce67411,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1738724627669949859,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-375673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26bd007cc9dcaac924eae2f7839cf643,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c62ad4b-c5df-44fa-8105-b58b2b360a59 name=/runtime.v1.RuntimeService/ListContainers
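	The CRI-O debug entries above are the CRI RPCs (Version, ImageFsInfo, ListContainers) issued while these logs were being collected, and the "container status" table below is the same ListContainers data in tabular form. A rough way to reproduce them, assuming crictl is available inside the guest (it normally is in the minikube ISO):
	
	  minikube -p test-preload-375673 ssh    # inside the guest:
	  sudo crictl version
	  sudo crictl imagefsinfo
	  sudo crictl ps -a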
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	90e56a546e709       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   c338c236cbdca       coredns-6d4b75cb6d-fbjgg
	0610cd7db68b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   afdd556be0bb0       storage-provisioner
	0217b208c98e4       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   e917a910383d3       kube-proxy-lm8m5
	ad10867c39a60       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   5f48a3c8cce17       etcd-test-preload-375673
	2488994361b16       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   5b5808f1a2fe3       kube-apiserver-test-preload-375673
	7ee11406777f1       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   c277464cd4fdf       kube-scheduler-test-preload-375673
	e98b04ce56b01       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   dcb48e3a8c8bd       kube-controller-manager-test-preload-375673
	
	
	==> coredns [90e56a546e709f10282b8831a4901a39bc4ad3b988aa86a9a4ac85597580f84c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:59409 - 1428 "HINFO IN 2887585455800588832.4216092329915227669. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019792674s
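	The single NXDOMAIN for a random-looking HINFO name is consistent with CoreDNS's startup self-probe (the loop plugin queries a random name through its own forward path) rather than a failed client lookup. A quick way to confirm cluster DNS is answering normally, sketched with a throwaway busybox pod (the pod name and image tag are assumptions, not something this test uses):
	
	  kubectl --context test-preload-375673 run dns-probe --rm -it --restart=Never \
	    --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local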
	
	
	==> describe nodes <==
	Name:               test-preload-375673
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-375673
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d
	                    minikube.k8s.io/name=test-preload-375673
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_05T03_00_35_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 05 Feb 2025 03:00:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-375673
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 05 Feb 2025 03:04:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 05 Feb 2025 03:04:01 +0000   Wed, 05 Feb 2025 03:00:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 05 Feb 2025 03:04:01 +0000   Wed, 05 Feb 2025 03:00:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 05 Feb 2025 03:04:01 +0000   Wed, 05 Feb 2025 03:00:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 05 Feb 2025 03:04:01 +0000   Wed, 05 Feb 2025 03:04:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    test-preload-375673
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c2d5c34179049bfa04fa55a106e58da
	  System UUID:                8c2d5c34-1790-49bf-a04f-a55a106e58da
	  Boot ID:                    205930fa-1732-4659-a246-d87a88ca94f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-fbjgg                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m19s
	  kube-system                 etcd-test-preload-375673                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m31s
	  kube-system                 kube-apiserver-test-preload-375673             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-controller-manager-test-preload-375673    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-proxy-lm8m5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 kube-scheduler-test-preload-375673             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 3m17s              kube-proxy       
	  Normal  Starting                 3m32s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m32s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m32s              kubelet          Node test-preload-375673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m32s              kubelet          Node test-preload-375673 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m32s              kubelet          Node test-preload-375673 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m21s              kubelet          Node test-preload-375673 status is now: NodeReady
	  Normal  RegisteredNode           3m19s              node-controller  Node test-preload-375673 event: Registered Node test-preload-375673 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-375673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-375673 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-375673 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-375673 event: Registered Node test-preload-375673 in Controller
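	The "Allocated resources" totals follow directly from the pod table above: CPU requests are 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 750m, and 750m of the 2000m allocatable is 37.5%, reported as 37%. Memory requests are 70Mi + 100Mi = 170Mi, the only memory limit is coredns's 170Mi, and 170Mi of the 2164184Ki (~2113Mi) allocatable is roughly 8%.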
	
	
	==> dmesg <==
	[Feb 5 03:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052791] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038660] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.869025] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.008025] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.568995] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.838299] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.064742] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057705] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.159839] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.149476] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.236500] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[ +12.873859] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.059703] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.783371] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +5.178357] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.403933] systemd-fstab-generator[1766]: Ignoring "noauto" option for root device
	[  +5.534837] kauditd_printk_skb: 53 callbacks suppressed
	[Feb 5 03:04] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [ad10867c39a606614a5e35abb6bab6ef5ea1a9e0d84e6623adb0f16c324e6da3] <==
	{"level":"info","ts":"2025-02-05T03:03:48.093Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"2236e2deb63504cb","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-02-05T03:03:48.097Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-02-05T03:03:48.097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb switched to configuration voters=(2465407292199470283)"}
	{"level":"info","ts":"2025-02-05T03:03:48.103Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","added-peer-id":"2236e2deb63504cb","added-peer-peer-urls":["https://192.168.39.150:2380"]}
	{"level":"info","ts":"2025-02-05T03:03:48.105Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-05T03:03:48.105Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-05T03:03:48.103Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-05T03:03:48.108Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2236e2deb63504cb","initial-advertise-peer-urls":["https://192.168.39.150:2380"],"listen-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.150:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-05T03:03:48.108Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-05T03:03:48.103Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2025-02-05T03:03:48.108Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2025-02-05T03:03:49.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-05T03:03:49.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-05T03:03:49.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb received MsgPreVoteResp from 2236e2deb63504cb at term 2"}
	{"level":"info","ts":"2025-02-05T03:03:49.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became candidate at term 3"}
	{"level":"info","ts":"2025-02-05T03:03:49.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb received MsgVoteResp from 2236e2deb63504cb at term 3"}
	{"level":"info","ts":"2025-02-05T03:03:49.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became leader at term 3"}
	{"level":"info","ts":"2025-02-05T03:03:49.061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2236e2deb63504cb elected leader 2236e2deb63504cb at term 3"}
	{"level":"info","ts":"2025-02-05T03:03:49.065Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"2236e2deb63504cb","local-member-attributes":"{Name:test-preload-375673 ClientURLs:[https://192.168.39.150:2379]}","request-path":"/0/members/2236e2deb63504cb/attributes","cluster-id":"d5d2d7cf60dc9e96","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-05T03:03:49.065Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T03:03:49.067Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T03:03:49.073Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-05T03:03:49.080Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.150:2379"}
	{"level":"info","ts":"2025-02-05T03:03:49.085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-05T03:03:49.085Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 03:04:07 up 0 min,  0 users,  load average: 0.75, 0.23, 0.08
	Linux test-preload-375673 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2488994361b162ec97fef36762327c597bc8e572b2b7ed30ebf41431df445f9b] <==
	I0205 03:03:51.706124       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0205 03:03:51.706184       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0205 03:03:51.712698       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0205 03:03:51.713114       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0205 03:03:51.761114       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0205 03:03:51.761227       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E0205 03:03:51.824872       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0205 03:03:51.859471       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0205 03:03:51.861714       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0205 03:03:51.878067       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0205 03:03:51.899838       1 cache.go:39] Caches are synced for autoregister controller
	I0205 03:03:51.900038       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0205 03:03:51.901584       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0205 03:03:51.902601       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0205 03:03:51.906142       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0205 03:03:52.388190       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0205 03:03:52.706554       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0205 03:03:53.148325       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0205 03:03:53.259745       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0205 03:03:53.278521       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0205 03:03:53.321177       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0205 03:03:53.340168       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0205 03:03:53.349177       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0205 03:04:04.736612       1 controller.go:611] quota admission added evaluator for: endpoints
	I0205 03:04:04.886728       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e98b04ce56b0103530c2f2dc018e67f253f79ab6b26aaa5b2de26f3ca7d6142e] <==
	I0205 03:04:04.732966       1 shared_informer.go:262] Caches are synced for GC
	I0205 03:04:04.732983       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0205 03:04:04.734224       1 shared_informer.go:262] Caches are synced for deployment
	I0205 03:04:04.734342       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0205 03:04:04.734426       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0205 03:04:04.734516       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0205 03:04:04.734229       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0205 03:04:04.746100       1 shared_informer.go:262] Caches are synced for node
	I0205 03:04:04.746176       1 range_allocator.go:173] Starting range CIDR allocator
	I0205 03:04:04.746195       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0205 03:04:04.746214       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0205 03:04:04.748504       1 shared_informer.go:262] Caches are synced for crt configmap
	I0205 03:04:04.757058       1 shared_informer.go:262] Caches are synced for job
	I0205 03:04:04.767362       1 shared_informer.go:262] Caches are synced for ephemeral
	I0205 03:04:04.783655       1 shared_informer.go:262] Caches are synced for attach detach
	I0205 03:04:04.882973       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0205 03:04:04.910482       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0205 03:04:04.933255       1 shared_informer.go:262] Caches are synced for disruption
	I0205 03:04:04.933279       1 disruption.go:371] Sending events to api server.
	I0205 03:04:04.940184       1 shared_informer.go:262] Caches are synced for stateful set
	I0205 03:04:04.942437       1 shared_informer.go:262] Caches are synced for resource quota
	I0205 03:04:04.949077       1 shared_informer.go:262] Caches are synced for resource quota
	I0205 03:04:05.355164       1 shared_informer.go:262] Caches are synced for garbage collector
	I0205 03:04:05.355239       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0205 03:04:05.373288       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [0217b208c98e41038efb1b006cf10971748370d90e6c573dd9d822805d8b363c] <==
	I0205 03:03:53.046403       1 node.go:163] Successfully retrieved node IP: 192.168.39.150
	I0205 03:03:53.046503       1 server_others.go:138] "Detected node IP" address="192.168.39.150"
	I0205 03:03:53.046539       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0205 03:03:53.125607       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0205 03:03:53.125636       1 server_others.go:206] "Using iptables Proxier"
	I0205 03:03:53.126362       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0205 03:03:53.126691       1 server.go:661] "Version info" version="v1.24.4"
	I0205 03:03:53.126716       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 03:03:53.131110       1 config.go:317] "Starting service config controller"
	I0205 03:03:53.131221       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0205 03:03:53.131253       1 config.go:226] "Starting endpoint slice config controller"
	I0205 03:03:53.131257       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0205 03:03:53.131968       1 config.go:444] "Starting node config controller"
	I0205 03:03:53.131990       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0205 03:03:53.232437       1 shared_informer.go:262] Caches are synced for node config
	I0205 03:03:53.232494       1 shared_informer.go:262] Caches are synced for service config
	I0205 03:03:53.232517       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7ee11406777f17303d44696d43ab2b8d7fb23ac54e59d2f0cd8b768285b0d368] <==
	I0205 03:03:49.411728       1 serving.go:348] Generated self-signed cert in-memory
	W0205 03:03:51.727854       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0205 03:03:51.727930       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0205 03:03:51.727945       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0205 03:03:51.727952       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0205 03:03:51.782151       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0205 03:03:51.782235       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 03:03:51.785247       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0205 03:03:51.786972       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 03:03:51.792874       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W0205 03:03:51.806712       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0205 03:03:51.806758       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0205 03:03:51.806777       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0205 03:03:51.806783       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0205 03:03:51.786997       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0205 03:03:51.893772       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 05 03:03:51 test-preload-375673 kubelet[1127]: I0205 03:03:51.997437    1127 apiserver.go:52] "Watching apiserver"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.001108    1127 topology_manager.go:200] "Topology Admit Handler"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.001226    1127 topology_manager.go:200] "Topology Admit Handler"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.001263    1127 topology_manager.go:200] "Topology Admit Handler"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: E0205 03:03:52.004834    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-fbjgg" podUID=008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.045133    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxx29\" (UniqueName: \"kubernetes.io/projected/4274ff6e-b8f0-4d7b-b97f-000aabec7f1b-kube-api-access-nxx29\") pod \"storage-provisioner\" (UID: \"4274ff6e-b8f0-4d7b-b97f-000aabec7f1b\") " pod="kube-system/storage-provisioner"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.045241    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6281cff6-09fd-4137-bc0c-62b443c5ca40-lib-modules\") pod \"kube-proxy-lm8m5\" (UID: \"6281cff6-09fd-4137-bc0c-62b443c5ca40\") " pod="kube-system/kube-proxy-lm8m5"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.045263    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4274ff6e-b8f0-4d7b-b97f-000aabec7f1b-tmp\") pod \"storage-provisioner\" (UID: \"4274ff6e-b8f0-4d7b-b97f-000aabec7f1b\") " pod="kube-system/storage-provisioner"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.045431    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmzrh\" (UniqueName: \"kubernetes.io/projected/008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a-kube-api-access-vmzrh\") pod \"coredns-6d4b75cb6d-fbjgg\" (UID: \"008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a\") " pod="kube-system/coredns-6d4b75cb6d-fbjgg"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.045456    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6281cff6-09fd-4137-bc0c-62b443c5ca40-kube-proxy\") pod \"kube-proxy-lm8m5\" (UID: \"6281cff6-09fd-4137-bc0c-62b443c5ca40\") " pod="kube-system/kube-proxy-lm8m5"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.045541    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6281cff6-09fd-4137-bc0c-62b443c5ca40-xtables-lock\") pod \"kube-proxy-lm8m5\" (UID: \"6281cff6-09fd-4137-bc0c-62b443c5ca40\") " pod="kube-system/kube-proxy-lm8m5"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.045623    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46hxg\" (UniqueName: \"kubernetes.io/projected/6281cff6-09fd-4137-bc0c-62b443c5ca40-kube-api-access-46hxg\") pod \"kube-proxy-lm8m5\" (UID: \"6281cff6-09fd-4137-bc0c-62b443c5ca40\") " pod="kube-system/kube-proxy-lm8m5"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.045714    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a-config-volume\") pod \"coredns-6d4b75cb6d-fbjgg\" (UID: \"008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a\") " pod="kube-system/coredns-6d4b75cb6d-fbjgg"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: I0205 03:03:52.045749    1127 reconciler.go:159] "Reconciler: start to sync state"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: E0205 03:03:52.047565    1127 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: E0205 03:03:52.148514    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: E0205 03:03:52.148630    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a-config-volume podName:008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a nodeName:}" failed. No retries permitted until 2025-02-05 03:03:52.648591868 +0000 UTC m=+5.783473308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a-config-volume") pod "coredns-6d4b75cb6d-fbjgg" (UID: "008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a") : object "kube-system"/"coredns" not registered
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: E0205 03:03:52.653348    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 05 03:03:52 test-preload-375673 kubelet[1127]: E0205 03:03:52.653429    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a-config-volume podName:008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a nodeName:}" failed. No retries permitted until 2025-02-05 03:03:53.65341407 +0000 UTC m=+6.788295493 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a-config-volume") pod "coredns-6d4b75cb6d-fbjgg" (UID: "008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a") : object "kube-system"/"coredns" not registered
	Feb 05 03:03:53 test-preload-375673 kubelet[1127]: E0205 03:03:53.671837    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 05 03:03:53 test-preload-375673 kubelet[1127]: E0205 03:03:53.671944    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a-config-volume podName:008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a nodeName:}" failed. No retries permitted until 2025-02-05 03:03:55.671928536 +0000 UTC m=+8.806809974 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a-config-volume") pod "coredns-6d4b75cb6d-fbjgg" (UID: "008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a") : object "kube-system"/"coredns" not registered
	Feb 05 03:03:54 test-preload-375673 kubelet[1127]: E0205 03:03:54.089068    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-fbjgg" podUID=008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a
	Feb 05 03:03:55 test-preload-375673 kubelet[1127]: E0205 03:03:55.688360    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 05 03:03:55 test-preload-375673 kubelet[1127]: E0205 03:03:55.688485    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a-config-volume podName:008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a nodeName:}" failed. No retries permitted until 2025-02-05 03:03:59.68846154 +0000 UTC m=+12.823342978 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a-config-volume") pod "coredns-6d4b75cb6d-fbjgg" (UID: "008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a") : object "kube-system"/"coredns" not registered
	Feb 05 03:03:56 test-preload-375673 kubelet[1127]: E0205 03:03:56.089005    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-fbjgg" podUID=008c2ab9-feeb-4aa4-a4bf-a9cd25eb386a
	
	
	==> storage-provisioner [0610cd7db68b0842be124bae1daeb905ac68760e7a585992aa53f66529b8b4c4] <==
	I0205 03:03:53.176352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-375673 -n test-preload-375673
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-375673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-375673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-375673
E0205 03:04:09.273729   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-375673: (1.131607442s)
--- FAIL: TestPreload (289.76s)

                                                
                                    
TestKubernetesUpgrade (1182.08s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m3.881308402s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-024079] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-024079" primary control-plane node in "kubernetes-upgrade-024079" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 03:09:09.396845   58877 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:09:09.396990   58877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:09:09.397001   58877 out.go:358] Setting ErrFile to fd 2...
	I0205 03:09:09.397007   58877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:09:09.397361   58877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:09:09.398170   58877 out.go:352] Setting JSON to false
	I0205 03:09:09.399387   58877 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6700,"bootTime":1738718249,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:09:09.399534   58877 start.go:139] virtualization: kvm guest
	I0205 03:09:09.401774   58877 out.go:177] * [kubernetes-upgrade-024079] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:09:09.403066   58877 notify.go:220] Checking for updates...
	I0205 03:09:09.403104   58877 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:09:09.404338   58877 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:09:09.405641   58877 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:09:09.406813   58877 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:09:09.407986   58877 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:09:09.409009   58877 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:09:09.410636   58877 config.go:182] Loaded profile config "NoKubernetes-290619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0205 03:09:09.410762   58877 config.go:182] Loaded profile config "cert-expiration-908105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:09:09.410906   58877 config.go:182] Loaded profile config "cert-options-653669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:09:09.411037   58877 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:09:09.455012   58877 out.go:177] * Using the kvm2 driver based on user configuration
	I0205 03:09:09.456374   58877 start.go:297] selected driver: kvm2
	I0205 03:09:09.456393   58877 start.go:901] validating driver "kvm2" against <nil>
	I0205 03:09:09.456408   58877 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:09:09.457580   58877 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:09:09.457701   58877 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:09:09.474316   58877 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:09:09.474370   58877 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0205 03:09:09.474714   58877 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0205 03:09:09.474746   58877 cni.go:84] Creating CNI manager for ""
	I0205 03:09:09.474807   58877 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:09:09.474822   58877 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0205 03:09:09.474893   58877 start.go:340] cluster config:
	{Name:kubernetes-upgrade-024079 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-024079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:09:09.475023   58877 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:09:09.476601   58877 out.go:177] * Starting "kubernetes-upgrade-024079" primary control-plane node in "kubernetes-upgrade-024079" cluster
	I0205 03:09:09.477703   58877 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0205 03:09:09.477747   58877 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0205 03:09:09.477757   58877 cache.go:56] Caching tarball of preloaded images
	I0205 03:09:09.477883   58877 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 03:09:09.477896   58877 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0205 03:09:09.478012   58877 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/config.json ...
	I0205 03:09:09.478039   58877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/config.json: {Name:mkcb95cf6c7ca65a193d2ed7cb85a7657c9e0a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:09:09.478198   58877 start.go:360] acquireMachinesLock for kubernetes-upgrade-024079: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:09:42.358968   58877 start.go:364] duration metric: took 32.880713952s to acquireMachinesLock for "kubernetes-upgrade-024079"
	I0205 03:09:42.359037   58877 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-024079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-024079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:09:42.359176   58877 start.go:125] createHost starting for "" (driver="kvm2")
	I0205 03:09:42.362043   58877 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0205 03:09:42.362313   58877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:09:42.362378   58877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:09:42.382113   58877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34935
	I0205 03:09:42.382563   58877 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:09:42.383157   58877 main.go:141] libmachine: Using API Version  1
	I0205 03:09:42.383183   58877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:09:42.383553   58877 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:09:42.383853   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetMachineName
	I0205 03:09:42.384034   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:09:42.384235   58877 start.go:159] libmachine.API.Create for "kubernetes-upgrade-024079" (driver="kvm2")
	I0205 03:09:42.384267   58877 client.go:168] LocalClient.Create starting
	I0205 03:09:42.384310   58877 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem
	I0205 03:09:42.384349   58877 main.go:141] libmachine: Decoding PEM data...
	I0205 03:09:42.384370   58877 main.go:141] libmachine: Parsing certificate...
	I0205 03:09:42.384443   58877 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem
	I0205 03:09:42.384477   58877 main.go:141] libmachine: Decoding PEM data...
	I0205 03:09:42.384490   58877 main.go:141] libmachine: Parsing certificate...
	I0205 03:09:42.384511   58877 main.go:141] libmachine: Running pre-create checks...
	I0205 03:09:42.384525   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .PreCreateCheck
	I0205 03:09:42.384938   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetConfigRaw
	I0205 03:09:42.385410   58877 main.go:141] libmachine: Creating machine...
	I0205 03:09:42.385422   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .Create
	I0205 03:09:42.385584   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) creating KVM machine...
	I0205 03:09:42.385605   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) creating network...
	I0205 03:09:42.386954   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found existing default KVM network
	I0205 03:09:42.389572   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:42.389404   59329 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0205 03:09:42.390449   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:42.390345   59329 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:5b:a5:a5} reservation:<nil>}
	I0205 03:09:42.391464   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:42.391370   59329 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000115cc0}
	I0205 03:09:42.391520   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | created network xml: 
	I0205 03:09:42.391535   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | <network>
	I0205 03:09:42.391552   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG |   <name>mk-kubernetes-upgrade-024079</name>
	I0205 03:09:42.391569   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG |   <dns enable='no'/>
	I0205 03:09:42.391581   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG |   
	I0205 03:09:42.391589   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0205 03:09:42.391601   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG |     <dhcp>
	I0205 03:09:42.391620   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0205 03:09:42.391651   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG |     </dhcp>
	I0205 03:09:42.391682   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG |   </ip>
	I0205 03:09:42.391692   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG |   
	I0205 03:09:42.391699   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | </network>
	I0205 03:09:42.391711   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | 
	I0205 03:09:42.397597   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | trying to create private KVM network mk-kubernetes-upgrade-024079 192.168.61.0/24...
	I0205 03:09:42.485490   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) setting up store path in /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079 ...
	I0205 03:09:42.485520   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | private KVM network mk-kubernetes-upgrade-024079 192.168.61.0/24 created
	I0205 03:09:42.485534   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) building disk image from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0205 03:09:42.485558   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Downloading /home/jenkins/minikube-integration/20363-12788/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0205 03:09:42.485575   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:42.482656   59329 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:09:42.747999   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:42.747839   59329 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/id_rsa...
	I0205 03:09:43.081354   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:43.081159   59329 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/kubernetes-upgrade-024079.rawdisk...
	I0205 03:09:43.081393   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | Writing magic tar header
	I0205 03:09:43.081414   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | Writing SSH key tar header
	I0205 03:09:43.081428   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:43.081266   59329 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079 ...
	I0205 03:09:43.081445   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079
	I0205 03:09:43.081464   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines
	I0205 03:09:43.081482   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079 (perms=drwx------)
	I0205 03:09:43.081497   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:09:43.081510   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788
	I0205 03:09:43.081522   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0205 03:09:43.081535   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | checking permissions on dir: /home/jenkins
	I0205 03:09:43.081547   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines (perms=drwxr-xr-x)
	I0205 03:09:43.081559   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | checking permissions on dir: /home
	I0205 03:09:43.081572   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | skipping /home - not owner
	I0205 03:09:43.081586   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube (perms=drwxr-xr-x)
	I0205 03:09:43.081599   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) setting executable bit set on /home/jenkins/minikube-integration/20363-12788 (perms=drwxrwxr-x)
	I0205 03:09:43.081615   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0205 03:09:43.081633   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0205 03:09:43.081646   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) creating domain...
	I0205 03:09:43.082576   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) define libvirt domain using xml: 
	I0205 03:09:43.082595   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) <domain type='kvm'>
	I0205 03:09:43.082603   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   <name>kubernetes-upgrade-024079</name>
	I0205 03:09:43.082608   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   <memory unit='MiB'>2200</memory>
	I0205 03:09:43.082613   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   <vcpu>2</vcpu>
	I0205 03:09:43.082617   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   <features>
	I0205 03:09:43.082623   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <acpi/>
	I0205 03:09:43.082630   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <apic/>
	I0205 03:09:43.082635   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <pae/>
	I0205 03:09:43.082644   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     
	I0205 03:09:43.082652   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   </features>
	I0205 03:09:43.082663   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   <cpu mode='host-passthrough'>
	I0205 03:09:43.082670   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   
	I0205 03:09:43.082680   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   </cpu>
	I0205 03:09:43.082697   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   <os>
	I0205 03:09:43.082704   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <type>hvm</type>
	I0205 03:09:43.082710   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <boot dev='cdrom'/>
	I0205 03:09:43.082717   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <boot dev='hd'/>
	I0205 03:09:43.082722   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <bootmenu enable='no'/>
	I0205 03:09:43.082733   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   </os>
	I0205 03:09:43.082741   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   <devices>
	I0205 03:09:43.082754   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <disk type='file' device='cdrom'>
	I0205 03:09:43.082775   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/boot2docker.iso'/>
	I0205 03:09:43.082785   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <target dev='hdc' bus='scsi'/>
	I0205 03:09:43.082790   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <readonly/>
	I0205 03:09:43.082796   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     </disk>
	I0205 03:09:43.082801   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <disk type='file' device='disk'>
	I0205 03:09:43.082809   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0205 03:09:43.082820   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/kubernetes-upgrade-024079.rawdisk'/>
	I0205 03:09:43.082831   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <target dev='hda' bus='virtio'/>
	I0205 03:09:43.082861   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     </disk>
	I0205 03:09:43.082881   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <interface type='network'>
	I0205 03:09:43.082913   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <source network='mk-kubernetes-upgrade-024079'/>
	I0205 03:09:43.082936   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <model type='virtio'/>
	I0205 03:09:43.082948   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     </interface>
	I0205 03:09:43.082960   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <interface type='network'>
	I0205 03:09:43.082974   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <source network='default'/>
	I0205 03:09:43.082985   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <model type='virtio'/>
	I0205 03:09:43.082996   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     </interface>
	I0205 03:09:43.083011   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <serial type='pty'>
	I0205 03:09:43.083036   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <target port='0'/>
	I0205 03:09:43.083057   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     </serial>
	I0205 03:09:43.083070   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <console type='pty'>
	I0205 03:09:43.083100   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <target type='serial' port='0'/>
	I0205 03:09:43.083112   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     </console>
	I0205 03:09:43.083121   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     <rng model='virtio'>
	I0205 03:09:43.083135   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)       <backend model='random'>/dev/random</backend>
	I0205 03:09:43.083142   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     </rng>
	I0205 03:09:43.083149   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     
	I0205 03:09:43.083163   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)     
	I0205 03:09:43.083175   58877 main.go:141] libmachine: (kubernetes-upgrade-024079)   </devices>
	I0205 03:09:43.083185   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) </domain>
	I0205 03:09:43.083196   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) 
	I0205 03:09:43.087465   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:86:6c:5a in network default
	I0205 03:09:43.087989   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) starting domain...
	I0205 03:09:43.088009   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) ensuring networks are active...
	I0205 03:09:43.088018   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:43.088734   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Ensuring network default is active
	I0205 03:09:43.089160   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Ensuring network mk-kubernetes-upgrade-024079 is active
	I0205 03:09:43.089737   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) getting domain XML...
	I0205 03:09:43.090524   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) creating domain...
	I0205 03:09:44.331943   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) waiting for IP...
	I0205 03:09:44.332652   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:44.333117   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:44.333157   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:44.333099   59329 retry.go:31] will retry after 209.043022ms: waiting for domain to come up
	I0205 03:09:44.543564   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:44.550030   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:44.550071   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:44.549980   59329 retry.go:31] will retry after 307.71728ms: waiting for domain to come up
	I0205 03:09:44.921417   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:44.921919   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:44.921948   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:44.921893   59329 retry.go:31] will retry after 341.807521ms: waiting for domain to come up
	I0205 03:09:45.265447   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:45.265880   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:45.265913   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:45.265839   59329 retry.go:31] will retry after 431.248467ms: waiting for domain to come up
	I0205 03:09:45.698426   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:45.698965   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:45.699021   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:45.698932   59329 retry.go:31] will retry after 592.76635ms: waiting for domain to come up
	I0205 03:09:46.293761   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:46.294229   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:46.294249   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:46.294186   59329 retry.go:31] will retry after 707.895068ms: waiting for domain to come up
	I0205 03:09:47.004158   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:47.004623   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:47.004647   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:47.004597   59329 retry.go:31] will retry after 1.028757078s: waiting for domain to come up
	I0205 03:09:48.035405   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:48.035852   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:48.035879   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:48.035812   59329 retry.go:31] will retry after 1.13404654s: waiting for domain to come up
	I0205 03:09:49.172118   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:49.172589   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:49.172613   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:49.172557   59329 retry.go:31] will retry after 1.568881167s: waiting for domain to come up
	I0205 03:09:50.743528   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:50.744054   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:50.744086   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:50.743978   59329 retry.go:31] will retry after 2.137632663s: waiting for domain to come up
	I0205 03:09:52.883454   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:52.883862   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:52.883892   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:52.883842   59329 retry.go:31] will retry after 2.185303866s: waiting for domain to come up
	I0205 03:09:55.070381   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:55.070871   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:55.070893   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:55.070849   59329 retry.go:31] will retry after 2.467532483s: waiting for domain to come up
	I0205 03:09:57.539763   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:09:57.540112   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:09:57.540144   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:09:57.540102   59329 retry.go:31] will retry after 4.035181382s: waiting for domain to come up
	I0205 03:10:01.580312   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:01.580738   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find current IP address of domain kubernetes-upgrade-024079 in network mk-kubernetes-upgrade-024079
	I0205 03:10:01.580763   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | I0205 03:10:01.580687   59329 retry.go:31] will retry after 5.557786034s: waiting for domain to come up
	I0205 03:10:07.143687   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.144149   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) found domain IP: 192.168.61.227
	I0205 03:10:07.144173   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) reserving static IP address...
	I0205 03:10:07.144186   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has current primary IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.144585   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-024079", mac: "52:54:00:01:45:f7", ip: "192.168.61.227"} in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.220560   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) reserved static IP address 192.168.61.227 for domain kubernetes-upgrade-024079
	I0205 03:10:07.220598   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) waiting for SSH...
	I0205 03:10:07.220609   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | Getting to WaitForSSH function...
	I0205 03:10:07.223389   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.223825   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:07.223857   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.223950   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | Using SSH client type: external
	I0205 03:10:07.223984   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/id_rsa (-rw-------)
	I0205 03:10:07.224044   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 03:10:07.224068   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | About to run SSH command:
	I0205 03:10:07.224108   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | exit 0
	I0205 03:10:07.344978   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | SSH cmd err, output: <nil>: 
	I0205 03:10:07.345294   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) KVM machine creation complete
	I0205 03:10:07.345607   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetConfigRaw
	I0205 03:10:07.346181   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:10:07.346351   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:10:07.346466   58877 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0205 03:10:07.346480   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetState
	I0205 03:10:07.347659   58877 main.go:141] libmachine: Detecting operating system of created instance...
	I0205 03:10:07.347671   58877 main.go:141] libmachine: Waiting for SSH to be available...
	I0205 03:10:07.347676   58877 main.go:141] libmachine: Getting to WaitForSSH function...
	I0205 03:10:07.347683   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:10:07.349883   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.350238   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:07.350263   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.350375   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:10:07.350531   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:07.350685   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:07.350803   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:10:07.350949   58877 main.go:141] libmachine: Using SSH client type: native
	I0205 03:10:07.351178   58877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0205 03:10:07.351205   58877 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0205 03:10:07.448534   58877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:10:07.448560   58877 main.go:141] libmachine: Detecting the provisioner...
	I0205 03:10:07.448568   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:10:07.451608   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.451951   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:07.451993   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.452124   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:10:07.452400   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:07.452585   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:07.452739   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:10:07.452900   58877 main.go:141] libmachine: Using SSH client type: native
	I0205 03:10:07.453074   58877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0205 03:10:07.453084   58877 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0205 03:10:07.550306   58877 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0205 03:10:07.550412   58877 main.go:141] libmachine: found compatible host: buildroot
	I0205 03:10:07.550423   58877 main.go:141] libmachine: Provisioning with buildroot...
	I0205 03:10:07.550433   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetMachineName
	I0205 03:10:07.550683   58877 buildroot.go:166] provisioning hostname "kubernetes-upgrade-024079"
	I0205 03:10:07.550708   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetMachineName
	I0205 03:10:07.550823   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:10:07.553393   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.553759   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:07.553789   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.553900   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:10:07.554089   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:07.554245   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:07.554407   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:10:07.554568   58877 main.go:141] libmachine: Using SSH client type: native
	I0205 03:10:07.554777   58877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0205 03:10:07.554790   58877 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-024079 && echo "kubernetes-upgrade-024079" | sudo tee /etc/hostname
	I0205 03:10:07.667314   58877 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-024079
	
	I0205 03:10:07.667345   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:10:07.670273   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.670717   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:07.670765   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.670938   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:10:07.671147   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:07.671319   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:07.671430   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:10:07.671625   58877 main.go:141] libmachine: Using SSH client type: native
	I0205 03:10:07.671834   58877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0205 03:10:07.671851   58877 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-024079' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-024079/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-024079' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:10:07.777799   58877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:10:07.777830   58877 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:10:07.777876   58877 buildroot.go:174] setting up certificates
	I0205 03:10:07.777891   58877 provision.go:84] configureAuth start
	I0205 03:10:07.777910   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetMachineName
	I0205 03:10:07.778200   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetIP
	I0205 03:10:07.780809   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.781151   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:07.781185   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.781279   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:10:07.783519   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.783852   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:07.783887   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.784008   58877 provision.go:143] copyHostCerts
	I0205 03:10:07.784084   58877 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:10:07.784103   58877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:10:07.784181   58877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:10:07.784334   58877 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:10:07.784348   58877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:10:07.784379   58877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:10:07.784470   58877 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:10:07.784478   58877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:10:07.784500   58877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:10:07.784575   58877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-024079 san=[127.0.0.1 192.168.61.227 kubernetes-upgrade-024079 localhost minikube]
	I0205 03:10:07.958457   58877 provision.go:177] copyRemoteCerts
	I0205 03:10:07.958520   58877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:10:07.958547   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:10:07.961090   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.961461   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:07.961501   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:07.961626   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:10:07.961829   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:07.961988   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:10:07.962132   58877 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/id_rsa Username:docker}
	I0205 03:10:08.038949   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:10:08.064957   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0205 03:10:08.090367   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0205 03:10:08.115371   58877 provision.go:87] duration metric: took 337.463352ms to configureAuth
	I0205 03:10:08.115399   58877 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:10:08.115569   58877 config.go:182] Loaded profile config "kubernetes-upgrade-024079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0205 03:10:08.115657   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:10:08.118214   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.118499   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:08.118537   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.118712   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:10:08.118887   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:08.119017   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:08.119142   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:10:08.119278   58877 main.go:141] libmachine: Using SSH client type: native
	I0205 03:10:08.119494   58877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0205 03:10:08.119521   58877 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:10:08.332110   58877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 03:10:08.332159   58877 main.go:141] libmachine: Checking connection to Docker...
	I0205 03:10:08.332173   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetURL
	I0205 03:10:08.333614   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | using libvirt version 6000000
	I0205 03:10:08.336062   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.336424   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:08.336450   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.336650   58877 main.go:141] libmachine: Docker is up and running!
	I0205 03:10:08.336667   58877 main.go:141] libmachine: Reticulating splines...
	I0205 03:10:08.336683   58877 client.go:171] duration metric: took 25.952399956s to LocalClient.Create
	I0205 03:10:08.336709   58877 start.go:167] duration metric: took 25.952478099s to libmachine.API.Create "kubernetes-upgrade-024079"
	I0205 03:10:08.336718   58877 start.go:293] postStartSetup for "kubernetes-upgrade-024079" (driver="kvm2")
	I0205 03:10:08.336727   58877 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:10:08.336745   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:10:08.336975   58877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:10:08.337005   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:10:08.339488   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.339885   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:08.339915   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.340056   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:10:08.340236   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:08.340422   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:10:08.340599   58877 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/id_rsa Username:docker}
	I0205 03:10:08.420063   58877 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:10:08.424484   58877 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:10:08.424513   58877 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:10:08.424591   58877 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:10:08.424703   58877 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:10:08.424812   58877 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:10:08.433949   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:10:08.456537   58877 start.go:296] duration metric: took 119.804637ms for postStartSetup
	I0205 03:10:08.456600   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetConfigRaw
	I0205 03:10:08.457302   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetIP
	I0205 03:10:08.460276   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.460707   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:08.460740   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.460925   58877 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/config.json ...
	I0205 03:10:08.461116   58877 start.go:128] duration metric: took 26.101926369s to createHost
	I0205 03:10:08.461138   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:10:08.463434   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.463826   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:08.463857   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.464016   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:10:08.464201   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:08.464378   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:08.464556   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:10:08.464731   58877 main.go:141] libmachine: Using SSH client type: native
	I0205 03:10:08.464915   58877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0205 03:10:08.464926   58877 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:10:08.565943   58877 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738725008.524671940
	
	I0205 03:10:08.565969   58877 fix.go:216] guest clock: 1738725008.524671940
	I0205 03:10:08.565977   58877 fix.go:229] Guest: 2025-02-05 03:10:08.52467194 +0000 UTC Remote: 2025-02-05 03:10:08.461127484 +0000 UTC m=+59.115741790 (delta=63.544456ms)
	I0205 03:10:08.566003   58877 fix.go:200] guest clock delta is within tolerance: 63.544456ms
	I0205 03:10:08.566007   58877 start.go:83] releasing machines lock for "kubernetes-upgrade-024079", held for 26.207006693s
	I0205 03:10:08.566034   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:10:08.566312   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetIP
	I0205 03:10:08.569164   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.569511   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:08.569545   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.569671   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:10:08.570245   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:10:08.570425   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:10:08.570551   58877 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:10:08.570601   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:10:08.570645   58877 ssh_runner.go:195] Run: cat /version.json
	I0205 03:10:08.570676   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:10:08.573308   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.573636   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.573671   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:08.573693   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.573806   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:10:08.573979   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:08.574099   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:10:08.574136   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:08.574169   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:08.574234   58877 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/id_rsa Username:docker}
	I0205 03:10:08.574354   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:10:08.574476   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:10:08.574597   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:10:08.574725   58877 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/id_rsa Username:docker}
	I0205 03:10:08.674840   58877 ssh_runner.go:195] Run: systemctl --version
	I0205 03:10:08.681204   58877 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:10:08.854190   58877 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:10:08.859916   58877 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:10:08.859992   58877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:10:08.875930   58877 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0205 03:10:08.875955   58877 start.go:495] detecting cgroup driver to use...
	I0205 03:10:08.876009   58877 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:10:08.892436   58877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:10:08.907253   58877 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:10:08.907311   58877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:10:08.920506   58877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:10:08.934051   58877 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:10:09.049426   58877 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:10:09.215839   58877 docker.go:233] disabling docker service ...
	I0205 03:10:09.215926   58877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:10:09.230232   58877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:10:09.243148   58877 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:10:09.375322   58877 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:10:09.508277   58877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:10:09.522177   58877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:10:09.542046   58877 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0205 03:10:09.542106   58877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:10:09.552548   58877 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:10:09.552639   58877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:10:09.562796   58877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:10:09.575115   58877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:10:09.587074   58877 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:10:09.598814   58877 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:10:09.609114   58877 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0205 03:10:09.609169   58877 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0205 03:10:09.623101   58877 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 03:10:09.635816   58877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:10:09.775347   58877 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:10:09.866793   58877 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:10:09.866873   58877 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:10:09.871592   58877 start.go:563] Will wait 60s for crictl version
	I0205 03:10:09.871657   58877 ssh_runner.go:195] Run: which crictl
	I0205 03:10:09.875622   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:10:09.922194   58877 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:10:09.922268   58877 ssh_runner.go:195] Run: crio --version
	I0205 03:10:09.953763   58877 ssh_runner.go:195] Run: crio --version
	I0205 03:10:09.985842   58877 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0205 03:10:09.987005   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetIP
	I0205 03:10:09.989755   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:09.990151   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:09:57 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:10:09.990178   58877 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:10:09.990470   58877 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0205 03:10:09.994912   58877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:10:10.008350   58877 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-024079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-024079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:10:10.008444   58877 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0205 03:10:10.008486   58877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:10:10.053504   58877 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0205 03:10:10.053576   58877 ssh_runner.go:195] Run: which lz4
	I0205 03:10:10.057604   58877 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0205 03:10:10.063797   58877 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0205 03:10:10.063832   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0205 03:10:11.580560   58877 crio.go:462] duration metric: took 1.522985177s to copy over tarball
	I0205 03:10:11.580638   58877 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0205 03:10:14.281173   58877 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.700501875s)
	I0205 03:10:14.281213   58877 crio.go:469] duration metric: took 2.70061942s to extract the tarball
	I0205 03:10:14.281222   58877 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0205 03:10:14.325692   58877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:10:14.371487   58877 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0205 03:10:14.371518   58877 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0205 03:10:14.371595   58877 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:10:14.371620   58877 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0205 03:10:14.371644   58877 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:10:14.371652   58877 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0205 03:10:14.371677   58877 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0205 03:10:14.371683   58877 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:10:14.371619   58877 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:10:14.371708   58877 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:10:14.373319   58877 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:10:14.373325   58877 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:10:14.373319   58877 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0205 03:10:14.373372   58877 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:10:14.373349   58877 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:10:14.373411   58877 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0205 03:10:14.373425   58877 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:10:14.373414   58877 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0205 03:10:14.508083   58877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0205 03:10:14.508086   58877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:10:14.515328   58877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:10:14.524347   58877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0205 03:10:14.532039   58877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:10:14.561674   58877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:10:14.593850   58877 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0205 03:10:14.593897   58877 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:10:14.593940   58877 ssh_runner.go:195] Run: which crictl
	I0205 03:10:14.616547   58877 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0205 03:10:14.616603   58877 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0205 03:10:14.616689   58877 ssh_runner.go:195] Run: which crictl
	I0205 03:10:14.637981   58877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0205 03:10:14.654791   58877 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0205 03:10:14.654839   58877 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:10:14.654887   58877 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0205 03:10:14.654920   58877 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0205 03:10:14.654919   58877 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0205 03:10:14.654937   58877 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:10:14.654958   58877 ssh_runner.go:195] Run: which crictl
	I0205 03:10:14.654965   58877 ssh_runner.go:195] Run: which crictl
	I0205 03:10:14.654893   58877 ssh_runner.go:195] Run: which crictl
	I0205 03:10:14.683525   58877 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0205 03:10:14.683569   58877 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:10:14.683612   58877 ssh_runner.go:195] Run: which crictl
	I0205 03:10:14.683623   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:10:14.683629   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0205 03:10:14.718359   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0205 03:10:14.718459   58877 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0205 03:10:14.718515   58877 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0205 03:10:14.718552   58877 ssh_runner.go:195] Run: which crictl
	I0205 03:10:14.718400   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:10:14.718483   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:10:14.718463   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:10:14.799882   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:10:14.811927   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0205 03:10:14.856771   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0205 03:10:14.856896   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:10:14.856977   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0205 03:10:14.856975   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:10:14.857053   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:10:14.910909   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:10:14.924582   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0205 03:10:15.000854   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0205 03:10:15.022974   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:10:15.023067   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:10:15.023631   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:10:15.023723   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0205 03:10:15.089603   58877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0205 03:10:15.089682   58877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0205 03:10:15.107697   58877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0205 03:10:15.158690   58877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0205 03:10:15.158807   58877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0205 03:10:15.158836   58877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0205 03:10:15.158894   58877 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0205 03:10:15.191830   58877 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0205 03:10:15.294326   58877 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:10:15.438883   58877 cache_images.go:92] duration metric: took 1.067346204s to LoadCachedImages
	W0205 03:10:15.439015   58877 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
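The "needs transfer" checks above compare the image ID already present in the container runtime against the expected hash; when they differ, the stale tag is removed so the cached tarball can be loaded instead. A minimal sketch of doing the same check by hand on the node (assuming shell access, e.g. via `minikube ssh -p kubernetes-upgrade-024079`; the crictl flags shown are standard usage, not taken from this run):
	sudo /usr/bin/crictl images --digests | grep kube-apiserver        # is the image present, and at which ID?
	sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0    # drop a stale tag so the cached image can be loaded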
	I0205 03:10:15.439036   58877 kubeadm.go:934] updating node { 192.168.61.227 8443 v1.20.0 crio true true} ...
	I0205 03:10:15.439190   58877 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-024079 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-024079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0205 03:10:15.439289   58877 ssh_runner.go:195] Run: crio config
	I0205 03:10:15.486691   58877 cni.go:84] Creating CNI manager for ""
	I0205 03:10:15.486713   58877 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:10:15.486722   58877 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:10:15.486741   58877 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.227 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-024079 NodeName:kubernetes-upgrade-024079 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0205 03:10:15.486875   58877 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-024079"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 03:10:15.486932   58877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0205 03:10:15.497128   58877 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:10:15.497217   58877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:10:15.510604   58877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0205 03:10:15.527960   58877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:10:15.546275   58877 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0205 03:10:15.563736   58877 ssh_runner.go:195] Run: grep 192.168.61.227	control-plane.minikube.internal$ /etc/hosts
	I0205 03:10:15.567635   58877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:10:15.579701   58877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:10:15.708655   58877 ssh_runner.go:195] Run: sudo systemctl start kubelet
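At this point the kubelet drop-in, the kubelet.service unit and kubeadm.yaml.new have been written, /etc/hosts has been patched, and the kubelet has been started. A short sketch of verifying that state on the node (paths and names taken from the lines above):
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the 433-byte drop-in written above
	sudo cat /var/tmp/minikube/kubeadm.yaml.new                      # the rendered kubeadm config staged above
	grep control-plane.minikube.internal /etc/hosts                  # entry added by the bash one-liner above
	sudo systemctl is-active kubelet                                 # should report "active" after the start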
	I0205 03:10:15.726221   58877 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079 for IP: 192.168.61.227
	I0205 03:10:15.726243   58877 certs.go:194] generating shared ca certs ...
	I0205 03:10:15.726262   58877 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:10:15.726444   58877 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:10:15.726513   58877 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:10:15.726527   58877 certs.go:256] generating profile certs ...
	I0205 03:10:15.726598   58877 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/client.key
	I0205 03:10:15.726622   58877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/client.crt with IP's: []
	I0205 03:10:16.022752   58877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/client.crt ...
	I0205 03:10:16.022783   58877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/client.crt: {Name:mk6a208cd45a53e0727d490e51a70f4111d84434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:10:16.022980   58877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/client.key ...
	I0205 03:10:16.023002   58877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/client.key: {Name:mk1792f7719abd3955ad2d75d3814c85c8f51450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:10:16.023095   58877 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.key.6ce9ee11
	I0205 03:10:16.023124   58877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.crt.6ce9ee11 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.227]
	I0205 03:10:16.289527   58877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.crt.6ce9ee11 ...
	I0205 03:10:16.289563   58877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.crt.6ce9ee11: {Name:mk2c85cd28521a4bfa926f54831144b34960d837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:10:16.289728   58877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.key.6ce9ee11 ...
	I0205 03:10:16.289744   58877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.key.6ce9ee11: {Name:mk0c6932f900b128a8c96b9d7a8816d0002011b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:10:16.289818   58877 certs.go:381] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.crt.6ce9ee11 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.crt
	I0205 03:10:16.289888   58877 certs.go:385] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.key.6ce9ee11 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.key
	I0205 03:10:16.289946   58877 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/proxy-client.key
	I0205 03:10:16.289961   58877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/proxy-client.crt with IP's: []
	I0205 03:10:16.426183   58877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/proxy-client.crt ...
	I0205 03:10:16.426213   58877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/proxy-client.crt: {Name:mk76d29e8be5294cb5d4bd746aba967b8a2644c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:10:16.426400   58877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/proxy-client.key ...
	I0205 03:10:16.426415   58877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/proxy-client.key: {Name:mk170438e89dbde6f0dabf1edb9ca6638997f8a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:10:16.426651   58877 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:10:16.426714   58877 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:10:16.426730   58877 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:10:16.426765   58877 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:10:16.426794   58877 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:10:16.426816   58877 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:10:16.426856   58877 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:10:16.427502   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:10:16.459580   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:10:16.485206   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:10:16.510409   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:10:16.537897   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0205 03:10:16.563899   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0205 03:10:16.589401   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:10:16.627587   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 03:10:16.652445   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:10:16.677032   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:10:16.700656   58877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:10:16.724649   58877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
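The scp steps above stage the CA, apiserver and proxy-client material under /var/lib/minikube/certs. One way to sanity-check that the apiserver certificate (generated earlier for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.61.227) carries the expected SANs, as a hedged example:
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'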
	I0205 03:10:16.743510   58877 ssh_runner.go:195] Run: openssl version
	I0205 03:10:16.749680   58877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:10:16.765332   58877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:10:16.771549   58877 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:10:16.771625   58877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:10:16.777816   58877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:10:16.793721   58877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:10:16.805696   58877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:10:16.810470   58877 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:10:16.810539   58877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:10:16.816215   58877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 03:10:16.826878   58877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19989.pem && ln -fs /usr/share/ca-certificates/19989.pem /etc/ssl/certs/19989.pem"
	I0205 03:10:16.837636   58877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19989.pem
	I0205 03:10:16.841929   58877 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:11 /usr/share/ca-certificates/19989.pem
	I0205 03:10:16.841988   58877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19989.pem
	I0205 03:10:16.847842   58877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19989.pem /etc/ssl/certs/51391683.0"
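The openssl/ln sequence above builds the standard OpenSSL hashed-directory layout: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs as <subject-hash>.0. A sketch of reproducing one hash by hand (the value b5213941 comes from the link command above):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # should point back at minikubeCA.pem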
	I0205 03:10:16.858680   58877 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 03:10:16.862680   58877 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0205 03:10:16.862744   58877 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-024079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-024079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:10:16.862834   58877 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 03:10:16.862891   58877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:10:16.900336   58877 cri.go:89] found id: ""
	I0205 03:10:16.900439   58877 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 03:10:16.910625   58877 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 03:10:16.921804   58877 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:10:16.932019   58877 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:10:16.932047   58877 kubeadm.go:157] found existing configuration files:
	
	I0205 03:10:16.932150   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:10:16.945386   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:10:16.945456   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:10:16.959008   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:10:16.972081   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:10:16.972153   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:10:16.982699   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:10:16.992688   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:10:16.992779   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:10:17.002698   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:10:17.012128   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:10:17.012195   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
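The grep/rm pairs above are the stale-config check: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is deleted before kubeadm init runs. An equivalent one-liner, as a sketch:
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done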
	I0205 03:10:17.022281   58877 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:10:17.138593   58877 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0205 03:10:17.138719   58877 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:10:17.297294   58877 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:10:17.297447   58877 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:10:17.297600   58877 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0205 03:10:17.519701   58877 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:10:17.521829   58877 out.go:235]   - Generating certificates and keys ...
	I0205 03:10:17.521957   58877 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:10:17.522049   58877 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:10:17.725760   58877 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 03:10:17.796134   58877 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 03:10:18.149183   58877 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 03:10:18.460773   58877 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 03:10:18.648987   58877 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 03:10:18.649256   58877 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-024079 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0205 03:10:18.882333   58877 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 03:10:18.882668   58877 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-024079 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0205 03:10:18.980339   58877 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 03:10:19.210517   58877 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 03:10:19.486697   58877 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 03:10:19.486949   58877 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:10:19.652620   58877 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:10:19.856993   58877 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:10:19.967574   58877 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:10:20.147613   58877 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:10:20.168943   58877 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:10:20.170018   58877 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:10:20.170096   58877 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:10:20.299998   58877 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:10:20.301581   58877 out.go:235]   - Booting up control plane ...
	I0205 03:10:20.301706   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:10:20.309086   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:10:20.309189   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:10:20.314340   58877 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:10:20.319308   58877 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0205 03:11:00.285972   58877 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0205 03:11:00.286464   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:11:00.286683   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:11:05.286111   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:11:05.286424   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:11:15.285545   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:11:15.285854   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:11:35.285227   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:11:35.285472   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:12:15.284936   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:12:15.285232   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:12:15.285256   58877 kubeadm.go:310] 
	I0205 03:12:15.285312   58877 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0205 03:12:15.285414   58877 kubeadm.go:310] 		timed out waiting for the condition
	I0205 03:12:15.285448   58877 kubeadm.go:310] 
	I0205 03:12:15.285506   58877 kubeadm.go:310] 	This error is likely caused by:
	I0205 03:12:15.285551   58877 kubeadm.go:310] 		- The kubelet is not running
	I0205 03:12:15.285699   58877 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0205 03:12:15.285733   58877 kubeadm.go:310] 
	I0205 03:12:15.285873   58877 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0205 03:12:15.285922   58877 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0205 03:12:15.285965   58877 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0205 03:12:15.285974   58877 kubeadm.go:310] 
	I0205 03:12:15.286094   58877 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0205 03:12:15.286194   58877 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0205 03:12:15.286206   58877 kubeadm.go:310] 
	I0205 03:12:15.286334   58877 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0205 03:12:15.286453   58877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0205 03:12:15.286551   58877 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0205 03:12:15.286639   58877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0205 03:12:15.286651   58877 kubeadm.go:310] 
	I0205 03:12:15.286812   58877 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:12:15.286920   58877 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0205 03:12:15.287004   58877 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
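Since the failure mode is the kubelet never answering its health endpoint, the commands kubeadm suggests can be run directly on the node before the retry below; a hedged sketch, adapted from the advice above:
	curl -sSL http://localhost:10248/healthz    # the probe kubelet-check keeps retrying
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause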
	W0205 03:12:15.287144   58877 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-024079 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-024079 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-024079 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-024079 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0205 03:12:15.287193   58877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0205 03:12:16.366363   58877 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.079134587s)
	I0205 03:12:16.366450   58877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:12:16.380456   58877 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:12:16.390034   58877 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:12:16.390060   58877 kubeadm.go:157] found existing configuration files:
	
	I0205 03:12:16.390106   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:12:16.398997   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:12:16.399054   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:12:16.409431   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:12:16.420730   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:12:16.420798   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:12:16.432957   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:12:16.441938   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:12:16.442017   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:12:16.452609   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:12:16.463862   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:12:16.463948   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 03:12:16.476971   58877 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:12:16.555608   58877 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0205 03:12:16.555761   58877 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:12:16.735232   58877 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:12:16.735377   58877 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:12:16.735482   58877 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0205 03:12:16.930863   58877 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:12:16.932632   58877 out.go:235]   - Generating certificates and keys ...
	I0205 03:12:16.932734   58877 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:12:16.932818   58877 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:12:16.932950   58877 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0205 03:12:16.933040   58877 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0205 03:12:16.933137   58877 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0205 03:12:16.933235   58877 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0205 03:12:16.933372   58877 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0205 03:12:16.933474   58877 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0205 03:12:16.933589   58877 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0205 03:12:16.933705   58877 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0205 03:12:16.933767   58877 kubeadm.go:310] [certs] Using the existing "sa" key
	I0205 03:12:16.933848   58877 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:12:16.992914   58877 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:12:17.050399   58877 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:12:17.166345   58877 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:12:17.371653   58877 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:12:17.385934   58877 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:12:17.387054   58877 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:12:17.387133   58877 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:12:17.524773   58877 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:12:17.526688   58877 out.go:235]   - Booting up control plane ...
	I0205 03:12:17.526814   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:12:17.529207   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:12:17.531542   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:12:17.532560   58877 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:12:17.538131   58877 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0205 03:12:57.540058   58877 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0205 03:12:57.540373   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:12:57.540660   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:13:02.541037   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:13:02.541373   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:13:12.541942   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:13:12.542228   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:13:32.543525   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:13:32.543845   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:14:12.543438   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:14:12.543757   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:14:12.543783   58877 kubeadm.go:310] 
	I0205 03:14:12.543842   58877 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0205 03:14:12.543913   58877 kubeadm.go:310] 		timed out waiting for the condition
	I0205 03:14:12.543927   58877 kubeadm.go:310] 
	I0205 03:14:12.543983   58877 kubeadm.go:310] 	This error is likely caused by:
	I0205 03:14:12.544053   58877 kubeadm.go:310] 		- The kubelet is not running
	I0205 03:14:12.544245   58877 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0205 03:14:12.544268   58877 kubeadm.go:310] 
	I0205 03:14:12.544409   58877 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0205 03:14:12.544464   58877 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0205 03:14:12.544517   58877 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0205 03:14:12.544527   58877 kubeadm.go:310] 
	I0205 03:14:12.544659   58877 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0205 03:14:12.544797   58877 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0205 03:14:12.544813   58877 kubeadm.go:310] 
	I0205 03:14:12.545007   58877 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0205 03:14:12.545151   58877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0205 03:14:12.545256   58877 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0205 03:14:12.545376   58877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0205 03:14:12.545392   58877 kubeadm.go:310] 
	I0205 03:14:12.545747   58877 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:14:12.545878   58877 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0205 03:14:12.545989   58877 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0205 03:14:12.546030   58877 kubeadm.go:394] duration metric: took 3m55.683289618s to StartCluster
	I0205 03:14:12.546083   58877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:14:12.546148   58877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:14:12.607938   58877 cri.go:89] found id: ""
	I0205 03:14:12.607967   58877 logs.go:282] 0 containers: []
	W0205 03:14:12.607977   58877 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:14:12.607984   58877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:14:12.608056   58877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:14:12.648004   58877 cri.go:89] found id: ""
	I0205 03:14:12.648038   58877 logs.go:282] 0 containers: []
	W0205 03:14:12.648049   58877 logs.go:284] No container was found matching "etcd"
	I0205 03:14:12.648058   58877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:14:12.648160   58877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:14:12.690423   58877 cri.go:89] found id: ""
	I0205 03:14:12.690461   58877 logs.go:282] 0 containers: []
	W0205 03:14:12.690472   58877 logs.go:284] No container was found matching "coredns"
	I0205 03:14:12.690480   58877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:14:12.690550   58877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:14:12.724699   58877 cri.go:89] found id: ""
	I0205 03:14:12.724732   58877 logs.go:282] 0 containers: []
	W0205 03:14:12.724749   58877 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:14:12.724758   58877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:14:12.724821   58877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:14:12.762872   58877 cri.go:89] found id: ""
	I0205 03:14:12.762900   58877 logs.go:282] 0 containers: []
	W0205 03:14:12.762908   58877 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:14:12.762914   58877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:14:12.762962   58877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:14:12.799470   58877 cri.go:89] found id: ""
	I0205 03:14:12.799499   58877 logs.go:282] 0 containers: []
	W0205 03:14:12.799506   58877 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:14:12.799513   58877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:14:12.799576   58877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:14:12.833589   58877 cri.go:89] found id: ""
	I0205 03:14:12.833623   58877 logs.go:282] 0 containers: []
	W0205 03:14:12.833633   58877 logs.go:284] No container was found matching "kindnet"
	I0205 03:14:12.833645   58877 logs.go:123] Gathering logs for kubelet ...
	I0205 03:14:12.833659   58877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:14:12.885945   58877 logs.go:123] Gathering logs for dmesg ...
	I0205 03:14:12.885988   58877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:14:12.899565   58877 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:14:12.899593   58877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:14:13.027727   58877 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:14:13.027761   58877 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:14:13.027778   58877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:14:13.158999   58877 logs.go:123] Gathering logs for container status ...
	I0205 03:14:13.159049   58877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0205 03:14:13.208373   58877 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0205 03:14:13.208449   58877 out.go:270] * 
	* 
	W0205 03:14:13.208518   58877 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0205 03:14:13.208601   58877 out.go:270] * 
	* 
	W0205 03:14:13.209824   58877 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0205 03:14:13.214160   58877 out.go:201] 
	W0205 03:14:13.215473   58877 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0205 03:14:13.215539   58877 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0205 03:14:13.215570   58877 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0205 03:14:13.217054   58877 out.go:201] 

                                                
                                                
** /stderr **
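The run above exits with K8S_KUBELET_NOT_RUNNING after kubeadm's wait-control-plane phase times out waiting on the kubelet health endpoint (localhost:10248). A minimal triage sketch, assuming SSH access to the profile's VM and reusing only the commands the log itself suggests (the profile name and start flags are the ones from this run):

	# Check kubelet health on the node, as suggested in the kubeadm output
	out/minikube-linux-amd64 -p kubernetes-upgrade-024079 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-024079 ssh "sudo journalctl -xeu kubelet | tail -n 100"

	# List any control-plane containers CRI-O actually started
	out/minikube-linux-amd64 -p kubernetes-upgrade-024079 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# If the kubelet logs point at a cgroup-driver mismatch, retry with the suggestion printed above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd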
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-024079
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-024079: (6.334112729s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-024079 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-024079 status --format={{.Host}}: exit status 7 (67.815037ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.86510767s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-024079 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (83.288541ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-024079] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-024079
	    minikube start -p kubernetes-upgrade-024079 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0240792 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-024079 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
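The downgrade from v1.32.1 to v1.20.0 is refused with K8S_DOWNGRADE_UNSUPPORTED, which is what the test expects. Outside the harness, the first recovery path from the suggestion above would be, roughly (the --driver and --container-runtime flags are added here to match the rest of this run and are not part of the printed suggestion):

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-024079
	out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio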
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (13m50.384050027s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-024079] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-024079" primary control-plane node in "kubernetes-upgrade-024079" cluster
	* Updating the running kvm2 "kubernetes-upgrade-024079" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 03:14:58.746583   62959 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:14:58.746996   62959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:14:58.747013   62959 out.go:358] Setting ErrFile to fd 2...
	I0205 03:14:58.747021   62959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:14:58.747376   62959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:14:58.747915   62959 out.go:352] Setting JSON to false
	I0205 03:14:58.748778   62959 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7050,"bootTime":1738718249,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:14:58.748867   62959 start.go:139] virtualization: kvm guest
	I0205 03:14:58.750619   62959 out.go:177] * [kubernetes-upgrade-024079] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:14:58.751745   62959 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:14:58.751746   62959 notify.go:220] Checking for updates...
	I0205 03:14:58.752945   62959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:14:58.754057   62959 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:14:58.755198   62959 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:14:58.756317   62959 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:14:58.757458   62959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:14:58.759019   62959 config.go:182] Loaded profile config "kubernetes-upgrade-024079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:14:58.759447   62959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:14:58.759516   62959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:14:58.774117   62959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0205 03:14:58.774543   62959 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:14:58.775098   62959 main.go:141] libmachine: Using API Version  1
	I0205 03:14:58.775118   62959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:14:58.775465   62959 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:14:58.775668   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:14:58.775962   62959 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:14:58.776351   62959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:14:58.776396   62959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:14:58.790787   62959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
	I0205 03:14:58.791205   62959 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:14:58.791651   62959 main.go:141] libmachine: Using API Version  1
	I0205 03:14:58.791671   62959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:14:58.791986   62959 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:14:58.792160   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:14:58.827632   62959 out.go:177] * Using the kvm2 driver based on existing profile
	I0205 03:14:58.828840   62959 start.go:297] selected driver: kvm2
	I0205 03:14:58.828853   62959 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-024079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-024079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:14:58.828956   62959 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:14:58.829762   62959 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:14:58.829866   62959 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:14:58.844695   62959 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:14:58.845110   62959 cni.go:84] Creating CNI manager for ""
	I0205 03:14:58.845168   62959 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:14:58.845204   62959 start.go:340] cluster config:
	{Name:kubernetes-upgrade-024079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-024079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:14:58.845370   62959 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:14:58.847192   62959 out.go:177] * Starting "kubernetes-upgrade-024079" primary control-plane node in "kubernetes-upgrade-024079" cluster
	I0205 03:14:58.848238   62959 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:14:58.848270   62959 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 03:14:58.848284   62959 cache.go:56] Caching tarball of preloaded images
	I0205 03:14:58.848364   62959 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 03:14:58.848378   62959 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 03:14:58.848489   62959 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/config.json ...
	I0205 03:14:58.848713   62959 start.go:360] acquireMachinesLock for kubernetes-upgrade-024079: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:14:58.848764   62959 start.go:364] duration metric: took 29.051µs to acquireMachinesLock for "kubernetes-upgrade-024079"
	I0205 03:14:58.848785   62959 start.go:96] Skipping create...Using existing machine configuration
	I0205 03:14:58.848793   62959 fix.go:54] fixHost starting: 
	I0205 03:14:58.849077   62959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:14:58.849113   62959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:14:58.863240   62959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0205 03:14:58.863655   62959 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:14:58.864077   62959 main.go:141] libmachine: Using API Version  1
	I0205 03:14:58.864109   62959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:14:58.864454   62959 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:14:58.864652   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:14:58.864817   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetState
	I0205 03:14:58.866399   62959 fix.go:112] recreateIfNeeded on kubernetes-upgrade-024079: state=Running err=<nil>
	W0205 03:14:58.866415   62959 fix.go:138] unexpected machine state, will restart: <nil>
	I0205 03:14:58.868104   62959 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-024079" VM ...
	I0205 03:14:58.869261   62959 machine.go:93] provisionDockerMachine start ...
	I0205 03:14:58.869280   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:14:58.869489   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:14:58.871982   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:58.872370   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:14:58.872388   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:58.872526   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:14:58.872673   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:14:58.872800   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:14:58.872919   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:14:58.873037   62959 main.go:141] libmachine: Using SSH client type: native
	I0205 03:14:58.873272   62959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0205 03:14:58.873290   62959 main.go:141] libmachine: About to run SSH command:
	hostname
	I0205 03:14:58.993474   62959 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-024079
	
	I0205 03:14:58.993502   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetMachineName
	I0205 03:14:58.993788   62959 buildroot.go:166] provisioning hostname "kubernetes-upgrade-024079"
	I0205 03:14:58.993805   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetMachineName
	I0205 03:14:58.994014   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:14:58.996601   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:58.996930   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:14:58.996958   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:58.997167   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:14:58.997316   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:14:58.997477   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:14:58.997609   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:14:58.997751   62959 main.go:141] libmachine: Using SSH client type: native
	I0205 03:14:58.997933   62959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0205 03:14:58.997950   62959 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-024079 && echo "kubernetes-upgrade-024079" | sudo tee /etc/hostname
	I0205 03:14:59.146881   62959 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-024079
	
	I0205 03:14:59.146906   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:14:59.149945   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:59.150485   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:14:59.150519   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:59.150739   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:14:59.150928   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:14:59.151106   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:14:59.151248   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:14:59.151449   62959 main.go:141] libmachine: Using SSH client type: native
	I0205 03:14:59.151633   62959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0205 03:14:59.151650   62959 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-024079' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-024079/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-024079' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:14:59.281869   62959 main.go:141] libmachine: SSH cmd err, output: <nil>: 
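The guarded snippet above ensures the guest can resolve its own hostname; once it runs, /etc/hosts on the VM contains a line equivalent to the following (shown as an illustration, not captured from the machine):

	127.0.1.1 kubernetes-upgrade-024079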
	I0205 03:14:59.281901   62959 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:14:59.281939   62959 buildroot.go:174] setting up certificates
	I0205 03:14:59.281951   62959 provision.go:84] configureAuth start
	I0205 03:14:59.281966   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetMachineName
	I0205 03:14:59.282273   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetIP
	I0205 03:14:59.285184   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:59.285534   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:14:59.285567   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:59.285747   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:14:59.287808   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:59.288164   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:14:59.288200   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:59.288310   62959 provision.go:143] copyHostCerts
	I0205 03:14:59.288376   62959 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:14:59.288391   62959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:14:59.288464   62959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:14:59.288589   62959 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:14:59.288602   62959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:14:59.288632   62959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:14:59.288702   62959 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:14:59.288712   62959 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:14:59.288739   62959 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:14:59.288802   62959 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-024079 san=[127.0.0.1 192.168.61.227 kubernetes-upgrade-024079 localhost minikube]
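The server certificate generated here carries the SANs listed above (127.0.0.1, 192.168.61.227, kubernetes-upgrade-024079, localhost, minikube). If they need to be double-checked on the Jenkins host, a standard openssl inspection works; this command is illustrative and not part of the test run:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'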
	I0205 03:14:59.400828   62959 provision.go:177] copyRemoteCerts
	I0205 03:14:59.400889   62959 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:14:59.400914   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:14:59.403685   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:59.404022   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:14:59.404058   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:59.404286   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:14:59.404498   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:14:59.404655   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:14:59.404819   62959 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/id_rsa Username:docker}
	I0205 03:14:59.534818   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:14:59.562587   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0205 03:14:59.619609   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0205 03:14:59.654400   62959 provision.go:87] duration metric: took 372.434343ms to configureAuth
	I0205 03:14:59.654440   62959 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:14:59.654693   62959 config.go:182] Loaded profile config "kubernetes-upgrade-024079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:14:59.654787   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:14:59.657913   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:59.658338   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:14:59.658393   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:14:59.658601   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:14:59.658814   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:14:59.658981   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:14:59.659110   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:14:59.659279   62959 main.go:141] libmachine: Using SSH client type: native
	I0205 03:14:59.659500   62959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0205 03:14:59.659523   62959 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:15:00.631526   62959 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 03:15:00.631556   62959 machine.go:96] duration metric: took 1.762282449s to provisionDockerMachine
	I0205 03:15:00.631570   62959 start.go:293] postStartSetup for "kubernetes-upgrade-024079" (driver="kvm2")
	I0205 03:15:00.631583   62959 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:15:00.631606   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:15:00.631878   62959 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:15:00.631907   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:15:00.634640   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:15:00.634985   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:15:00.635013   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:15:00.635161   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:15:00.635341   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:15:00.635513   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:15:00.635644   62959 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/id_rsa Username:docker}
	I0205 03:15:00.719312   62959 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:15:00.723515   62959 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:15:00.723544   62959 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:15:00.723611   62959 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:15:00.723706   62959 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:15:00.723819   62959 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:15:00.732913   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:15:00.756039   62959 start.go:296] duration metric: took 124.452222ms for postStartSetup
	I0205 03:15:00.756084   62959 fix.go:56] duration metric: took 1.907291081s for fixHost
	I0205 03:15:00.756105   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:15:00.758616   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:15:00.759017   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:15:00.759040   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:15:00.759207   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:15:00.759389   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:15:00.759540   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:15:00.759655   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:15:00.759808   62959 main.go:141] libmachine: Using SSH client type: native
	I0205 03:15:00.760017   62959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0205 03:15:00.760029   62959 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:15:00.874298   62959 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738725300.864568392
	
	I0205 03:15:00.874324   62959 fix.go:216] guest clock: 1738725300.864568392
	I0205 03:15:00.874331   62959 fix.go:229] Guest: 2025-02-05 03:15:00.864568392 +0000 UTC Remote: 2025-02-05 03:15:00.756087924 +0000 UTC m=+2.053375702 (delta=108.480468ms)
	I0205 03:15:00.874361   62959 fix.go:200] guest clock delta is within tolerance: 108.480468ms
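The delta reported above is simply the guest clock minus the host-observed remote timestamp: 1738725300.864568392 - 1738725300.756087924 = 0.108480468 s, i.e. the 108.480468ms that fix.go compares against its tolerance.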
	I0205 03:15:00.874365   62959 start.go:83] releasing machines lock for "kubernetes-upgrade-024079", held for 2.025589342s
	I0205 03:15:00.874383   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:15:00.874628   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetIP
	I0205 03:15:00.877082   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:15:00.877407   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:15:00.877455   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:15:00.877557   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:15:00.878037   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:15:00.878215   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .DriverName
	I0205 03:15:00.878323   62959 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:15:00.878372   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:15:00.878382   62959 ssh_runner.go:195] Run: cat /version.json
	I0205 03:15:00.878402   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHHostname
	I0205 03:15:00.881088   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:15:00.881299   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:15:00.881462   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:15:00.881492   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:15:00.881670   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:15:00.881779   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:15:00.881805   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:15:00.881842   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:15:00.881993   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHPort
	I0205 03:15:00.882066   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:15:00.882141   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHKeyPath
	I0205 03:15:00.882210   62959 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/id_rsa Username:docker}
	I0205 03:15:00.882255   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetSSHUsername
	I0205 03:15:00.882367   62959 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kubernetes-upgrade-024079/id_rsa Username:docker}
	I0205 03:15:01.059548   62959 ssh_runner.go:195] Run: systemctl --version
	I0205 03:15:01.090688   62959 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:15:01.327394   62959 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:15:01.399433   62959 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:15:01.399539   62959 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:15:01.444480   62959 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0205 03:15:01.444504   62959 start.go:495] detecting cgroup driver to use...
	I0205 03:15:01.444568   62959 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:15:01.471046   62959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:15:01.493307   62959 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:15:01.493375   62959 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:15:01.514420   62959 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:15:01.540163   62959 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:15:01.745571   62959 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:15:01.915506   62959 docker.go:233] disabling docker service ...
	I0205 03:15:01.915602   62959 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:15:01.934103   62959 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:15:01.949589   62959 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:15:02.129787   62959 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:15:02.300196   62959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:15:02.314199   62959 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:15:02.333584   62959 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 03:15:02.333664   62959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:15:02.344103   62959 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:15:02.344172   62959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:15:02.354751   62959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:15:02.365263   62959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:15:02.376369   62959 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:15:02.387443   62959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:15:02.398058   62959 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:15:02.408711   62959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
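Taken together, the sed edits above leave the touched keys in /etc/crio/crio.conf.d/02-crio.conf roughly as follows. This is a reconstruction from the commands, not a capture from the VM, and the enclosing TOML tables are assumed to be CRI-O's usual [crio.image] and [crio.runtime] sections:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]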
	I0205 03:15:02.419087   62959 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:15:02.428870   62959 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 03:15:02.438387   62959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:15:02.567594   62959 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:16:33.054165   62959 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.48653107s)
	I0205 03:16:33.054205   62959 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:16:33.054266   62959 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:16:33.063188   62959 start.go:563] Will wait 60s for crictl version
	I0205 03:16:33.063275   62959 ssh_runner.go:195] Run: which crictl
	I0205 03:16:33.067609   62959 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:16:33.118606   62959 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:16:33.118698   62959 ssh_runner.go:195] Run: crio --version
	I0205 03:16:33.150039   62959 ssh_runner.go:195] Run: crio --version
	I0205 03:16:33.185041   62959 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0205 03:16:33.186560   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) Calling .GetIP
	I0205 03:16:33.189879   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:16:33.190311   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:45:f7", ip: ""} in network mk-kubernetes-upgrade-024079: {Iface:virbr3 ExpiryTime:2025-02-05 04:14:31 +0000 UTC Type:0 Mac:52:54:00:01:45:f7 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:kubernetes-upgrade-024079 Clientid:01:52:54:00:01:45:f7}
	I0205 03:16:33.190347   62959 main.go:141] libmachine: (kubernetes-upgrade-024079) DBG | domain kubernetes-upgrade-024079 has defined IP address 192.168.61.227 and MAC address 52:54:00:01:45:f7 in network mk-kubernetes-upgrade-024079
	I0205 03:16:33.190640   62959 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0205 03:16:33.197103   62959 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-024079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-024079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:16:33.197245   62959 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:16:33.197312   62959 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:16:33.250760   62959 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:16:33.250793   62959 crio.go:433] Images already preloaded, skipping extraction
	I0205 03:16:33.250857   62959 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:16:33.290134   62959 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:16:33.290160   62959 cache_images.go:84] Images are preloaded, skipping loading
	I0205 03:16:33.290169   62959 kubeadm.go:934] updating node { 192.168.61.227 8443 v1.32.1 crio true true} ...
	I0205 03:16:33.290305   62959 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-024079 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-024079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0205 03:16:33.290386   62959 ssh_runner.go:195] Run: crio config
	I0205 03:16:33.346442   62959 cni.go:84] Creating CNI manager for ""
	I0205 03:16:33.346475   62959 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:16:33.346486   62959 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:16:33.346512   62959 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.227 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-024079 NodeName:kubernetes-upgrade-024079 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 03:16:33.346683   62959 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-024079"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
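	The configuration printed above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. Outside the test, a file like this could be sanity-checked with kubeadm's own validator, which recent releases (v1.31 and later) provide; the command below is illustrative and was not run here:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new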
	
	I0205 03:16:33.346749   62959 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 03:16:33.357552   62959 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:16:33.357629   62959 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:16:33.367716   62959 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0205 03:16:33.390518   62959 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:16:33.414371   62959 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0205 03:16:33.434321   62959 ssh_runner.go:195] Run: grep 192.168.61.227	control-plane.minikube.internal$ /etc/hosts
	I0205 03:16:33.440639   62959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:16:33.590340   62959 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:16:33.605122   62959 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079 for IP: 192.168.61.227
	I0205 03:16:33.605149   62959 certs.go:194] generating shared ca certs ...
	I0205 03:16:33.605171   62959 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:16:33.605366   62959 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:16:33.605430   62959 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:16:33.605445   62959 certs.go:256] generating profile certs ...
	I0205 03:16:33.605558   62959 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/client.key
	I0205 03:16:33.605610   62959 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.key.6ce9ee11
	I0205 03:16:33.605660   62959 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/proxy-client.key
	I0205 03:16:33.605797   62959 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:16:33.605828   62959 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:16:33.605842   62959 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:16:33.605875   62959 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:16:33.605908   62959 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:16:33.605945   62959 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:16:33.606012   62959 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:16:33.606629   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:16:33.639749   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:16:33.670772   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:16:33.698867   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:16:33.725523   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0205 03:16:33.754024   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0205 03:16:33.786229   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:16:33.817376   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kubernetes-upgrade-024079/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 03:16:33.848694   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:16:33.879977   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:16:33.905938   62959 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:16:33.935826   62959 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:16:33.953296   62959 ssh_runner.go:195] Run: openssl version
	I0205 03:16:33.962570   62959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:16:33.974738   62959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:16:33.980137   62959 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:16:33.980222   62959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:16:33.986701   62959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:16:33.997024   62959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:16:34.010226   62959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:16:34.015519   62959 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:16:34.015595   62959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:16:34.022061   62959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 03:16:34.032539   62959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19989.pem && ln -fs /usr/share/ca-certificates/19989.pem /etc/ssl/certs/19989.pem"
	I0205 03:16:34.044263   62959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19989.pem
	I0205 03:16:34.049274   62959 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:11 /usr/share/ca-certificates/19989.pem
	I0205 03:16:34.049367   62959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19989.pem
	I0205 03:16:34.055530   62959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19989.pem /etc/ssl/certs/51391683.0"
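	The three ls/openssl/ln sequences above follow OpenSSL's hashed CA-directory convention: each PEM under /usr/share/ca-certificates is hashed with openssl x509 -hash and then symlinked into /etc/ssl/certs as <hash>.0 so TLS clients can locate it. A generic sketch of that pattern (illustrative; the path stands in for any of the .pem files above):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"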
	I0205 03:16:34.067848   62959 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 03:16:34.074357   62959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0205 03:16:34.082793   62959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0205 03:16:34.091165   62959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0205 03:16:34.098793   62959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0205 03:16:34.104778   62959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0205 03:16:34.111089   62959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0205 03:16:34.119032   62959 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-024079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-024079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:16:34.119166   62959 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 03:16:34.119250   62959 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:16:34.159862   62959 cri.go:89] found id: "a83c3d67e83cb5470a82a09eb35ec73c4e9b3de8b55b7064c28dc1104921e3f4"
	I0205 03:16:34.159893   62959 cri.go:89] found id: "92942caae37c65313af18f399632cdd962aeec826621ce52574496b14f7a24fc"
	I0205 03:16:34.159899   62959 cri.go:89] found id: "347bb099d995527bda269d350e983de1eb64cf9c850e6703fab56545a7f00f19"
	I0205 03:16:34.159904   62959 cri.go:89] found id: "85b8c5a201c31ddc55b3decc7f608d3f51b56a686d53e130559843cfde8c3ad3"
	I0205 03:16:34.159908   62959 cri.go:89] found id: "49587e8b9941f493326472a32b1690684bd70a4b062c471881c979303259d46d"
	I0205 03:16:34.159912   62959 cri.go:89] found id: "2aa187f8b7a453badd6612f85abd1d2507d098900ab3d000ef28d94bf6031ccb"
	I0205 03:16:34.159917   62959 cri.go:89] found id: "3094b5b1dc88cdeb72817bdf39e89f677ec890eab6a20989c9f67aa1b16d6b2f"
	I0205 03:16:34.159921   62959 cri.go:89] found id: ""
	I0205 03:16:34.159972   62959 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-02-05 03:28:49.089795808 +0000 UTC m=+5115.000636988
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-024079 -n kubernetes-upgrade-024079
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-024079 -n kubernetes-upgrade-024079: exit status 2 (231.022422ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-024079 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo cat                            | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo cat                            | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo cat                            | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo docker                         | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo cat                            | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo cat                            | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo cat                            | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo cat                            | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo                                | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo find                           | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-253147 sudo crio                           | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p calico-253147                                     | calico-253147         | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC | 05 Feb 25 03:28 UTC |
	| start   | -p custom-flannel-253147                             | custom-flannel-253147 | jenkins | v1.35.0 | 05 Feb 25 03:28 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
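	For reference, the final "start" entry in the Audit table above corresponds to an invocation along these lines (flags copied from the Args column; a reconstruction, not the runner's verbatim command line):
	
	  out/minikube-linux-amd64 start -p custom-flannel-253147 \
	    --memory=3072 --alsologtostderr \
	    --wait=true --wait-timeout=15m \
	    --cni=testdata/kube-flannel.yaml \
	    --driver=kvm2 \
	    --container-runtime=crio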
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 03:28:32
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 03:28:32.108165   74569 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:28:32.108460   74569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:28:32.108470   74569 out.go:358] Setting ErrFile to fd 2...
	I0205 03:28:32.108475   74569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:28:32.108701   74569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:28:32.109318   74569 out.go:352] Setting JSON to false
	I0205 03:28:32.110605   74569 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7863,"bootTime":1738718249,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:28:32.110698   74569 start.go:139] virtualization: kvm guest
	I0205 03:28:32.112710   74569 out.go:177] * [custom-flannel-253147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:28:32.113918   74569 notify.go:220] Checking for updates...
	I0205 03:28:32.113957   74569 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:28:32.115353   74569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:28:32.116515   74569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:28:32.117661   74569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:28:32.118665   74569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:28:32.119698   74569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:28:32.121164   74569 config.go:182] Loaded profile config "default-k8s-diff-port-568677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:28:32.121273   74569 config.go:182] Loaded profile config "kubernetes-upgrade-024079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:28:32.121405   74569 config.go:182] Loaded profile config "old-k8s-version-191773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0205 03:28:32.121541   74569 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:28:32.159551   74569 out.go:177] * Using the kvm2 driver based on user configuration
	I0205 03:28:32.160738   74569 start.go:297] selected driver: kvm2
	I0205 03:28:32.160763   74569 start.go:901] validating driver "kvm2" against <nil>
	I0205 03:28:32.160775   74569 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:28:32.161485   74569 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:28:32.161566   74569 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:28:32.177757   74569 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:28:32.177811   74569 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0205 03:28:32.178044   74569 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:28:32.178073   74569 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0205 03:28:32.178087   74569 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0205 03:28:32.178137   74569 start.go:340] cluster config:
	{Name:custom-flannel-253147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:custom-flannel-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:28:32.178227   74569 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:28:32.180387   74569 out.go:177] * Starting "custom-flannel-253147" primary control-plane node in "custom-flannel-253147" cluster
	I0205 03:28:32.181472   74569 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:28:32.181503   74569 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 03:28:32.181512   74569 cache.go:56] Caching tarball of preloaded images
	I0205 03:28:32.181582   74569 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 03:28:32.181594   74569 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 03:28:32.181682   74569 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/config.json ...
	I0205 03:28:32.181703   74569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/config.json: {Name:mkd1f3130db71ca5f574a45b7b36760071289aa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:28:32.181827   74569 start.go:360] acquireMachinesLock for custom-flannel-253147: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:28:32.181855   74569 start.go:364] duration metric: took 14.987µs to acquireMachinesLock for "custom-flannel-253147"
	I0205 03:28:32.181870   74569 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:custom-flannel-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:28:32.181930   74569 start.go:125] createHost starting for "" (driver="kvm2")
	I0205 03:28:29.338032   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:28:31.346739   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:28:32.183253   74569 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0205 03:28:32.183396   74569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:28:32.183439   74569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:28:32.198548   74569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0205 03:28:32.198997   74569 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:28:32.199554   74569 main.go:141] libmachine: Using API Version  1
	I0205 03:28:32.199577   74569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:28:32.199937   74569 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:28:32.200162   74569 main.go:141] libmachine: (custom-flannel-253147) Calling .GetMachineName
	I0205 03:28:32.200297   74569 main.go:141] libmachine: (custom-flannel-253147) Calling .DriverName
	I0205 03:28:32.200471   74569 start.go:159] libmachine.API.Create for "custom-flannel-253147" (driver="kvm2")
	I0205 03:28:32.200502   74569 client.go:168] LocalClient.Create starting
	I0205 03:28:32.200535   74569 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem
	I0205 03:28:32.200569   74569 main.go:141] libmachine: Decoding PEM data...
	I0205 03:28:32.200588   74569 main.go:141] libmachine: Parsing certificate...
	I0205 03:28:32.200649   74569 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem
	I0205 03:28:32.200668   74569 main.go:141] libmachine: Decoding PEM data...
	I0205 03:28:32.200680   74569 main.go:141] libmachine: Parsing certificate...
	I0205 03:28:32.200695   74569 main.go:141] libmachine: Running pre-create checks...
	I0205 03:28:32.200708   74569 main.go:141] libmachine: (custom-flannel-253147) Calling .PreCreateCheck
	I0205 03:28:32.201031   74569 main.go:141] libmachine: (custom-flannel-253147) Calling .GetConfigRaw
	I0205 03:28:32.201454   74569 main.go:141] libmachine: Creating machine...
	I0205 03:28:32.201468   74569 main.go:141] libmachine: (custom-flannel-253147) Calling .Create
	I0205 03:28:32.201584   74569 main.go:141] libmachine: (custom-flannel-253147) creating KVM machine...
	I0205 03:28:32.201606   74569 main.go:141] libmachine: (custom-flannel-253147) creating network...
	I0205 03:28:32.202906   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | found existing default KVM network
	I0205 03:28:32.204179   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:32.203982   74592 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e6:1a:05} reservation:<nil>}
	I0205 03:28:32.205371   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:32.205213   74592 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027e980}
	I0205 03:28:32.205403   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | created network xml: 
	I0205 03:28:32.205457   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | <network>
	I0205 03:28:32.205482   74569 main.go:141] libmachine: (custom-flannel-253147) DBG |   <name>mk-custom-flannel-253147</name>
	I0205 03:28:32.205497   74569 main.go:141] libmachine: (custom-flannel-253147) DBG |   <dns enable='no'/>
	I0205 03:28:32.205510   74569 main.go:141] libmachine: (custom-flannel-253147) DBG |   
	I0205 03:28:32.205525   74569 main.go:141] libmachine: (custom-flannel-253147) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0205 03:28:32.205541   74569 main.go:141] libmachine: (custom-flannel-253147) DBG |     <dhcp>
	I0205 03:28:32.205558   74569 main.go:141] libmachine: (custom-flannel-253147) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0205 03:28:32.205569   74569 main.go:141] libmachine: (custom-flannel-253147) DBG |     </dhcp>
	I0205 03:28:32.205581   74569 main.go:141] libmachine: (custom-flannel-253147) DBG |   </ip>
	I0205 03:28:32.205591   74569 main.go:141] libmachine: (custom-flannel-253147) DBG |   
	I0205 03:28:32.205601   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | </network>
	I0205 03:28:32.205611   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | 
	I0205 03:28:32.210688   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | trying to create private KVM network mk-custom-flannel-253147 192.168.50.0/24...
	I0205 03:28:32.280329   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | private KVM network mk-custom-flannel-253147 192.168.50.0/24 created
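	If the freshly created network needs to be checked by hand, virsh can dump it; a minimal sketch, assuming the qemu:///system connection used throughout this run:
	  virsh --connect qemu:///system net-list --all
	  virsh --connect qemu:///system net-dumpxml mk-custom-flannel-253147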
	I0205 03:28:32.280381   74569 main.go:141] libmachine: (custom-flannel-253147) setting up store path in /home/jenkins/minikube-integration/20363-12788/.minikube/machines/custom-flannel-253147 ...
	I0205 03:28:32.280407   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:32.280299   74592 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:28:32.280425   74569 main.go:141] libmachine: (custom-flannel-253147) building disk image from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0205 03:28:32.280460   74569 main.go:141] libmachine: (custom-flannel-253147) Downloading /home/jenkins/minikube-integration/20363-12788/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0205 03:28:32.537008   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:32.536888   74592 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/custom-flannel-253147/id_rsa...
	I0205 03:28:32.899485   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:32.899313   74592 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/custom-flannel-253147/custom-flannel-253147.rawdisk...
	I0205 03:28:32.899520   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | Writing magic tar header
	I0205 03:28:32.899531   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | Writing SSH key tar header
	I0205 03:28:32.899540   74569 main.go:141] libmachine: (custom-flannel-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/custom-flannel-253147 (perms=drwx------)
	I0205 03:28:32.899548   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:32.899420   74592 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/custom-flannel-253147 ...
	I0205 03:28:32.899556   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/custom-flannel-253147
	I0205 03:28:32.899562   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines
	I0205 03:28:32.899572   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:28:32.899578   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788
	I0205 03:28:32.899585   74569 main.go:141] libmachine: (custom-flannel-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines (perms=drwxr-xr-x)
	I0205 03:28:32.899591   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0205 03:28:32.899606   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | checking permissions on dir: /home/jenkins
	I0205 03:28:32.899612   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | checking permissions on dir: /home
	I0205 03:28:32.899620   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | skipping /home - not owner
	I0205 03:28:32.899632   74569 main.go:141] libmachine: (custom-flannel-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube (perms=drwxr-xr-x)
	I0205 03:28:32.899638   74569 main.go:141] libmachine: (custom-flannel-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788 (perms=drwxrwxr-x)
	I0205 03:28:32.899650   74569 main.go:141] libmachine: (custom-flannel-253147) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0205 03:28:32.899656   74569 main.go:141] libmachine: (custom-flannel-253147) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0205 03:28:32.899681   74569 main.go:141] libmachine: (custom-flannel-253147) creating domain...
	I0205 03:28:32.900918   74569 main.go:141] libmachine: (custom-flannel-253147) define libvirt domain using xml: 
	I0205 03:28:32.900935   74569 main.go:141] libmachine: (custom-flannel-253147) <domain type='kvm'>
	I0205 03:28:32.900941   74569 main.go:141] libmachine: (custom-flannel-253147)   <name>custom-flannel-253147</name>
	I0205 03:28:32.900946   74569 main.go:141] libmachine: (custom-flannel-253147)   <memory unit='MiB'>3072</memory>
	I0205 03:28:32.900950   74569 main.go:141] libmachine: (custom-flannel-253147)   <vcpu>2</vcpu>
	I0205 03:28:32.900954   74569 main.go:141] libmachine: (custom-flannel-253147)   <features>
	I0205 03:28:32.900960   74569 main.go:141] libmachine: (custom-flannel-253147)     <acpi/>
	I0205 03:28:32.900971   74569 main.go:141] libmachine: (custom-flannel-253147)     <apic/>
	I0205 03:28:32.900979   74569 main.go:141] libmachine: (custom-flannel-253147)     <pae/>
	I0205 03:28:32.900983   74569 main.go:141] libmachine: (custom-flannel-253147)     
	I0205 03:28:32.900989   74569 main.go:141] libmachine: (custom-flannel-253147)   </features>
	I0205 03:28:32.900996   74569 main.go:141] libmachine: (custom-flannel-253147)   <cpu mode='host-passthrough'>
	I0205 03:28:32.901001   74569 main.go:141] libmachine: (custom-flannel-253147)   
	I0205 03:28:32.901008   74569 main.go:141] libmachine: (custom-flannel-253147)   </cpu>
	I0205 03:28:32.901013   74569 main.go:141] libmachine: (custom-flannel-253147)   <os>
	I0205 03:28:32.901020   74569 main.go:141] libmachine: (custom-flannel-253147)     <type>hvm</type>
	I0205 03:28:32.901025   74569 main.go:141] libmachine: (custom-flannel-253147)     <boot dev='cdrom'/>
	I0205 03:28:32.901035   74569 main.go:141] libmachine: (custom-flannel-253147)     <boot dev='hd'/>
	I0205 03:28:32.901070   74569 main.go:141] libmachine: (custom-flannel-253147)     <bootmenu enable='no'/>
	I0205 03:28:32.901096   74569 main.go:141] libmachine: (custom-flannel-253147)   </os>
	I0205 03:28:32.901107   74569 main.go:141] libmachine: (custom-flannel-253147)   <devices>
	I0205 03:28:32.901137   74569 main.go:141] libmachine: (custom-flannel-253147)     <disk type='file' device='cdrom'>
	I0205 03:28:32.901154   74569 main.go:141] libmachine: (custom-flannel-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/custom-flannel-253147/boot2docker.iso'/>
	I0205 03:28:32.901166   74569 main.go:141] libmachine: (custom-flannel-253147)       <target dev='hdc' bus='scsi'/>
	I0205 03:28:32.901174   74569 main.go:141] libmachine: (custom-flannel-253147)       <readonly/>
	I0205 03:28:32.901186   74569 main.go:141] libmachine: (custom-flannel-253147)     </disk>
	I0205 03:28:32.901196   74569 main.go:141] libmachine: (custom-flannel-253147)     <disk type='file' device='disk'>
	I0205 03:28:32.901208   74569 main.go:141] libmachine: (custom-flannel-253147)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0205 03:28:32.901241   74569 main.go:141] libmachine: (custom-flannel-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/custom-flannel-253147/custom-flannel-253147.rawdisk'/>
	I0205 03:28:32.901270   74569 main.go:141] libmachine: (custom-flannel-253147)       <target dev='hda' bus='virtio'/>
	I0205 03:28:32.901284   74569 main.go:141] libmachine: (custom-flannel-253147)     </disk>
	I0205 03:28:32.901297   74569 main.go:141] libmachine: (custom-flannel-253147)     <interface type='network'>
	I0205 03:28:32.901316   74569 main.go:141] libmachine: (custom-flannel-253147)       <source network='mk-custom-flannel-253147'/>
	I0205 03:28:32.901334   74569 main.go:141] libmachine: (custom-flannel-253147)       <model type='virtio'/>
	I0205 03:28:32.901374   74569 main.go:141] libmachine: (custom-flannel-253147)     </interface>
	I0205 03:28:32.901384   74569 main.go:141] libmachine: (custom-flannel-253147)     <interface type='network'>
	I0205 03:28:32.901404   74569 main.go:141] libmachine: (custom-flannel-253147)       <source network='default'/>
	I0205 03:28:32.901416   74569 main.go:141] libmachine: (custom-flannel-253147)       <model type='virtio'/>
	I0205 03:28:32.901427   74569 main.go:141] libmachine: (custom-flannel-253147)     </interface>
	I0205 03:28:32.901435   74569 main.go:141] libmachine: (custom-flannel-253147)     <serial type='pty'>
	I0205 03:28:32.901447   74569 main.go:141] libmachine: (custom-flannel-253147)       <target port='0'/>
	I0205 03:28:32.901461   74569 main.go:141] libmachine: (custom-flannel-253147)     </serial>
	I0205 03:28:32.901480   74569 main.go:141] libmachine: (custom-flannel-253147)     <console type='pty'>
	I0205 03:28:32.901499   74569 main.go:141] libmachine: (custom-flannel-253147)       <target type='serial' port='0'/>
	I0205 03:28:32.901512   74569 main.go:141] libmachine: (custom-flannel-253147)     </console>
	I0205 03:28:32.901523   74569 main.go:141] libmachine: (custom-flannel-253147)     <rng model='virtio'>
	I0205 03:28:32.901533   74569 main.go:141] libmachine: (custom-flannel-253147)       <backend model='random'>/dev/random</backend>
	I0205 03:28:32.901565   74569 main.go:141] libmachine: (custom-flannel-253147)     </rng>
	I0205 03:28:32.901588   74569 main.go:141] libmachine: (custom-flannel-253147)     
	I0205 03:28:32.901608   74569 main.go:141] libmachine: (custom-flannel-253147)     
	I0205 03:28:32.901619   74569 main.go:141] libmachine: (custom-flannel-253147)   </devices>
	I0205 03:28:32.901627   74569 main.go:141] libmachine: (custom-flannel-253147) </domain>
	I0205 03:28:32.901640   74569 main.go:141] libmachine: (custom-flannel-253147) 
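	The domain defined from the XML above can be cross-checked with virsh; a minimal sketch, again assuming the qemu:///system connection (domifaddr reads the network's DHCP leases, which is roughly what the "waiting for IP" retries below are polling):
	  virsh --connect qemu:///system dumpxml custom-flannel-253147
	  virsh --connect qemu:///system domifaddr custom-flannel-253147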
	I0205 03:28:32.905807   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:50:47:bf in network default
	I0205 03:28:32.906365   74569 main.go:141] libmachine: (custom-flannel-253147) starting domain...
	I0205 03:28:32.906387   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:32.906416   74569 main.go:141] libmachine: (custom-flannel-253147) ensuring networks are active...
	I0205 03:28:32.907033   74569 main.go:141] libmachine: (custom-flannel-253147) Ensuring network default is active
	I0205 03:28:32.907316   74569 main.go:141] libmachine: (custom-flannel-253147) Ensuring network mk-custom-flannel-253147 is active
	I0205 03:28:32.907869   74569 main.go:141] libmachine: (custom-flannel-253147) getting domain XML...
	I0205 03:28:32.908569   74569 main.go:141] libmachine: (custom-flannel-253147) creating domain...
	I0205 03:28:34.138820   74569 main.go:141] libmachine: (custom-flannel-253147) waiting for IP...
	I0205 03:28:34.139572   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:34.140065   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:34.140151   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:34.140053   74592 retry.go:31] will retry after 286.520923ms: waiting for domain to come up
	I0205 03:28:34.428474   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:34.428943   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:34.428973   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:34.428892   74592 retry.go:31] will retry after 251.533205ms: waiting for domain to come up
	I0205 03:28:34.682280   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:34.682746   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:34.682771   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:34.682715   74592 retry.go:31] will retry after 338.062252ms: waiting for domain to come up
	I0205 03:28:35.022331   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:35.022918   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:35.022950   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:35.022868   74592 retry.go:31] will retry after 472.635665ms: waiting for domain to come up
	I0205 03:28:35.497707   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:35.498294   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:35.498329   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:35.498241   74592 retry.go:31] will retry after 633.366601ms: waiting for domain to come up
	I0205 03:28:36.132931   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:36.133466   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:36.133523   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:36.133452   74592 retry.go:31] will retry after 729.96125ms: waiting for domain to come up
	I0205 03:28:36.865371   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:36.865896   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:36.865914   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:36.865849   74592 retry.go:31] will retry after 1.036974871s: waiting for domain to come up
	I0205 03:28:33.838078   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:28:36.337952   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:28:38.338092   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
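	A metrics-server pod that stays not-Ready like this is usually quickest to inspect directly; a hedged sketch (the kube context owning this log stream, PID 68832, is not identifiable from this excerpt, so --context is omitted):
	  kubectl -n kube-system get pod metrics-server-f79f97bbb-k9q9v -o wide
	  kubectl -n kube-system describe pod metrics-server-f79f97bbb-k9q9v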
	I0205 03:28:37.903869   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:37.904431   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:37.904453   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:37.904396   74592 retry.go:31] will retry after 1.017667291s: waiting for domain to come up
	I0205 03:28:38.923677   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:38.924211   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:38.924237   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:38.924189   74592 retry.go:31] will retry after 1.669894805s: waiting for domain to come up
	I0205 03:28:40.595905   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:40.596424   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:40.596450   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:40.596391   74592 retry.go:31] will retry after 1.897446215s: waiting for domain to come up
	I0205 03:28:40.837934   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:28:42.842862   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:28:42.495033   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:42.495628   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:42.495661   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:42.495580   74592 retry.go:31] will retry after 1.974768478s: waiting for domain to come up
	I0205 03:28:44.471793   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | domain custom-flannel-253147 has defined MAC address 52:54:00:5f:8a:70 in network mk-custom-flannel-253147
	I0205 03:28:44.472279   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | unable to find current IP address of domain custom-flannel-253147 in network mk-custom-flannel-253147
	I0205 03:28:44.472311   74569 main.go:141] libmachine: (custom-flannel-253147) DBG | I0205 03:28:44.472243   74592 retry.go:31] will retry after 2.647127228s: waiting for domain to come up
	I0205 03:28:48.259762   62959 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0205 03:28:48.259877   62959 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0205 03:28:48.261773   62959 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0205 03:28:48.261873   62959 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:28:48.261992   62959 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:28:48.262124   62959 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:28:48.262256   62959 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0205 03:28:48.262363   62959 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:28:48.263820   62959 out.go:235]   - Generating certificates and keys ...
	I0205 03:28:48.263938   62959 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:28:48.264033   62959 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:28:48.264126   62959 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0205 03:28:48.264176   62959 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0205 03:28:48.264269   62959 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0205 03:28:48.264339   62959 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0205 03:28:48.264438   62959 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0205 03:28:48.264535   62959 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0205 03:28:48.264617   62959 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0205 03:28:48.264723   62959 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0205 03:28:48.264782   62959 kubeadm.go:310] [certs] Using the existing "sa" key
	I0205 03:28:48.264831   62959 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:28:48.264879   62959 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:28:48.264927   62959 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0205 03:28:48.264973   62959 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:28:48.265041   62959 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:28:48.265127   62959 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:28:48.265234   62959 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:28:48.265330   62959 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:28:45.336687   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:28:47.337332   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:28:48.266714   62959 out.go:235]   - Booting up control plane ...
	I0205 03:28:48.266810   62959 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:28:48.266893   62959 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:28:48.266977   62959 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:28:48.267091   62959 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:28:48.267198   62959 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:28:48.267260   62959 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:28:48.267403   62959 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0205 03:28:48.267536   62959 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0205 03:28:48.267619   62959 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.160572ms
	I0205 03:28:48.267712   62959 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0205 03:28:48.267768   62959 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000218961s
	I0205 03:28:48.267775   62959 kubeadm.go:310] 
	I0205 03:28:48.267819   62959 kubeadm.go:310] Unfortunately, an error has occurred:
	I0205 03:28:48.267864   62959 kubeadm.go:310] 	context deadline exceeded
	I0205 03:28:48.267874   62959 kubeadm.go:310] 
	I0205 03:28:48.267926   62959 kubeadm.go:310] This error is likely caused by:
	I0205 03:28:48.267981   62959 kubeadm.go:310] 	- The kubelet is not running
	I0205 03:28:48.268112   62959 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0205 03:28:48.268122   62959 kubeadm.go:310] 
	I0205 03:28:48.268237   62959 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0205 03:28:48.268293   62959 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0205 03:28:48.268345   62959 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0205 03:28:48.268356   62959 kubeadm.go:310] 
	I0205 03:28:48.268490   62959 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0205 03:28:48.268588   62959 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0205 03:28:48.268690   62959 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0205 03:28:48.268781   62959 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0205 03:28:48.268845   62959 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0205 03:28:48.268947   62959 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0205 03:28:48.268994   62959 kubeadm.go:394] duration metric: took 12m14.149971485s to StartCluster
	I0205 03:28:48.269032   62959 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:28:48.269083   62959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:28:48.315840   62959 cri.go:89] found id: "e81a95eed4c06e397c20c04d396a9be6bc84e97be5b6e8c603fdc1df8c4a5a6d"
	I0205 03:28:48.315869   62959 cri.go:89] found id: ""
	I0205 03:28:48.315880   62959 logs.go:282] 1 containers: [e81a95eed4c06e397c20c04d396a9be6bc84e97be5b6e8c603fdc1df8c4a5a6d]
	I0205 03:28:48.315942   62959 ssh_runner.go:195] Run: which crictl
	I0205 03:28:48.320502   62959 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:28:48.320566   62959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:28:48.357502   62959 cri.go:89] found id: ""
	I0205 03:28:48.357534   62959 logs.go:282] 0 containers: []
	W0205 03:28:48.357545   62959 logs.go:284] No container was found matching "etcd"
	I0205 03:28:48.357556   62959 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:28:48.357628   62959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:28:48.391082   62959 cri.go:89] found id: ""
	I0205 03:28:48.391113   62959 logs.go:282] 0 containers: []
	W0205 03:28:48.391121   62959 logs.go:284] No container was found matching "coredns"
	I0205 03:28:48.391127   62959 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:28:48.391178   62959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:28:48.424650   62959 cri.go:89] found id: ""
	I0205 03:28:48.424683   62959 logs.go:282] 0 containers: []
	W0205 03:28:48.424691   62959 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:28:48.424697   62959 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:28:48.424750   62959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:28:48.459050   62959 cri.go:89] found id: ""
	I0205 03:28:48.459076   62959 logs.go:282] 0 containers: []
	W0205 03:28:48.459083   62959 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:28:48.459090   62959 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:28:48.459141   62959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:28:48.493222   62959 cri.go:89] found id: ""
	I0205 03:28:48.493248   62959 logs.go:282] 0 containers: []
	W0205 03:28:48.493255   62959 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:28:48.493267   62959 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:28:48.493329   62959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:28:48.528601   62959 cri.go:89] found id: ""
	I0205 03:28:48.528626   62959 logs.go:282] 0 containers: []
	W0205 03:28:48.528636   62959 logs.go:284] No container was found matching "kindnet"
	I0205 03:28:48.528643   62959 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0205 03:28:48.528706   62959 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0205 03:28:48.562674   62959 cri.go:89] found id: ""
	I0205 03:28:48.562701   62959 logs.go:282] 0 containers: []
	W0205 03:28:48.562710   62959 logs.go:284] No container was found matching "storage-provisioner"
	I0205 03:28:48.562718   62959 logs.go:123] Gathering logs for kubelet ...
	I0205 03:28:48.562730   62959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:28:48.728452   62959 logs.go:123] Gathering logs for dmesg ...
	I0205 03:28:48.728489   62959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:28:48.743135   62959 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:28:48.743163   62959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:28:48.816547   62959 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
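	A "connection refused" on localhost:8443 can be narrowed down from a shell inside the affected VM, distinguishing "nothing listening" from "apiserver listening but unhealthy"; a minimal sketch (the owning profile for this process is not named in this excerpt):
	  sudo ss -tlnp | grep 8443
	  curl -sk https://localhost:8443/healthz ; echo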
	I0205 03:28:48.816575   62959 logs.go:123] Gathering logs for kube-apiserver [e81a95eed4c06e397c20c04d396a9be6bc84e97be5b6e8c603fdc1df8c4a5a6d] ...
	I0205 03:28:48.816589   62959 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e81a95eed4c06e397c20c04d396a9be6bc84e97be5b6e8c603fdc1df8c4a5a6d"
	I0205 03:28:48.852440   62959 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:28:48.852468   62959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:28:49.033301   62959 logs.go:123] Gathering logs for container status ...
	I0205 03:28:49.033344   62959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0205 03:28:49.071418   62959 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.32.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.160572ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000218961s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0205 03:28:49.071474   62959 out.go:270] * 
	W0205 03:28:49.071538   62959 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.32.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.160572ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000218961s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0205 03:28:49.071566   62959 out.go:270] * 
	W0205 03:28:49.072448   62959 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0205 03:28:49.075099   62959 out.go:201] 
	W0205 03:28:49.076146   62959 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.32.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.160572ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000218961s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0205 03:28:49.076229   62959 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0205 03:28:49.076268   62959 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0205 03:28:49.077605   62959 out.go:201] 
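Note that in this run the kubelet-check actually passed ("healthy after 502.160572ms") and it was the api-check that timed out, so the K8S_KUBELET_NOT_RUNNING suggestion above may not be the root cause. Still, a minimal follow-up sketch that just applies the printed suggestion, assuming shell access to the node and reusing the profile name and flags from this report:

    # On the node: inspect the kubelet, as the kubeadm output recommends
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 100

    # Retry the start with the cgroup driver pinned to systemd, per the suggestion above
    out/minikube-linux-amd64 start -p kubernetes-upgrade-024079 \
      --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd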
	
	
	==> CRI-O <==
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.653878345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726129653856600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=002b3fec-6017-4d1d-bcca-b858512d5d60 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.654387164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a0fb876-d2f3-4868-b5e2-b2b760141dda name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.654488609Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a0fb876-d2f3-4868-b5e2-b2b760141dda name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.654552884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81a95eed4c06e397c20c04d396a9be6bc84e97be5b6e8c603fdc1df8c4a5a6d,PodSandboxId:41174acb8746a71bb610074a92cc0580c2e776fb36fd3b47528050d344ecb9ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738726063183839387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-024079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1854e96712f908395a65b47710f3f7e1,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 15,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a0fb876-d2f3-4868-b5e2-b2b760141dda name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.689959439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01597efd-82b0-4324-8561-db9039c0313d name=/runtime.v1.RuntimeService/Version
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.690041366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01597efd-82b0-4324-8561-db9039c0313d name=/runtime.v1.RuntimeService/Version
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.691505193Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03de1720-f728-45e3-ac4e-d5018a47be8a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.691872008Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726129691848057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03de1720-f728-45e3-ac4e-d5018a47be8a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.692427185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fc08638-9a0b-48a8-98bf-7d3625a33d80 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.692523051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fc08638-9a0b-48a8-98bf-7d3625a33d80 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.692581538Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81a95eed4c06e397c20c04d396a9be6bc84e97be5b6e8c603fdc1df8c4a5a6d,PodSandboxId:41174acb8746a71bb610074a92cc0580c2e776fb36fd3b47528050d344ecb9ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738726063183839387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-024079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1854e96712f908395a65b47710f3f7e1,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 15,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fc08638-9a0b-48a8-98bf-7d3625a33d80 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.724111555Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a311255c-f13c-4e7c-947b-d01bc72ff75b name=/runtime.v1.RuntimeService/Version
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.724196040Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a311255c-f13c-4e7c-947b-d01bc72ff75b name=/runtime.v1.RuntimeService/Version
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.725554551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4fcac9f-7aa3-44e3-a5c5-78f5f0676161 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.725918965Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726129725896200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4fcac9f-7aa3-44e3-a5c5-78f5f0676161 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.726521863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2dbb05e0-0a5a-427e-a7ec-0a364b56324b name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.726581443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2dbb05e0-0a5a-427e-a7ec-0a364b56324b name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.726637158Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81a95eed4c06e397c20c04d396a9be6bc84e97be5b6e8c603fdc1df8c4a5a6d,PodSandboxId:41174acb8746a71bb610074a92cc0580c2e776fb36fd3b47528050d344ecb9ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738726063183839387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-024079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1854e96712f908395a65b47710f3f7e1,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 15,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2dbb05e0-0a5a-427e-a7ec-0a364b56324b name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.760287790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9da368ac-a3ff-40fd-91bb-cef0e16ca66a name=/runtime.v1.RuntimeService/Version
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.760370731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9da368ac-a3ff-40fd-91bb-cef0e16ca66a name=/runtime.v1.RuntimeService/Version
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.761726595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bfd8e3b9-483a-45de-bb89-a637ca7aa083 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.762072117Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726129762048684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bfd8e3b9-483a-45de-bb89-a637ca7aa083 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.762698311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2c96ea3-032f-48fd-a1d2-3c939391f431 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.762760864Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2c96ea3-032f-48fd-a1d2-3c939391f431 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:28:49 kubernetes-upgrade-024079 crio[2258]: time="2025-02-05 03:28:49.762816172Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81a95eed4c06e397c20c04d396a9be6bc84e97be5b6e8c603fdc1df8c4a5a6d,PodSandboxId:41174acb8746a71bb610074a92cc0580c2e776fb36fd3b47528050d344ecb9ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738726063183839387,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-024079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1854e96712f908395a65b47710f3f7e1,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 15,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2c96ea3-032f-48fd-a1d2-3c939391f431 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                ATTEMPT             POD ID              POD
	e81a95eed4c06       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   About a minute ago   Exited              kube-apiserver      15                  41174acb8746a       kube-apiserver-kubernetes-upgrade-024079
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
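"describe nodes" fails here because nothing is serving on localhost:8443, which matches the single exited kube-apiserver container in the status table above. A quick confirmation sketch, using the same crictl invocation the kubeadm output recommends (CONTAINERID is a placeholder, and curl on the node is assumed):

    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube-apiserver
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # If the apiserver were up, its health endpoint on 8443 would answer
    curl -k https://localhost:8443/healthz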
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.784846] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.062142] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063657] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.162042] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.122208] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.262753] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +3.954350] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +1.880976] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.060614] kauditd_printk_skb: 158 callbacks suppressed
	[ +12.153907] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	[  +0.073143] kauditd_printk_skb: 69 callbacks suppressed
	[Feb 5 03:15] systemd-fstab-generator[2056]: Ignoring "noauto" option for root device
	[  +0.192576] systemd-fstab-generator[2095]: Ignoring "noauto" option for root device
	[  +0.210613] systemd-fstab-generator[2109]: Ignoring "noauto" option for root device
	[  +0.155324] systemd-fstab-generator[2121]: Ignoring "noauto" option for root device
	[  +0.294478] systemd-fstab-generator[2166]: Ignoring "noauto" option for root device
	[Feb 5 03:16] systemd-fstab-generator[2342]: Ignoring "noauto" option for root device
	[  +0.088553] kauditd_printk_skb: 218 callbacks suppressed
	[  +2.499966] systemd-fstab-generator[2465]: Ignoring "noauto" option for root device
	[ +22.614101] kauditd_printk_skb: 63 callbacks suppressed
	[Feb 5 03:20] systemd-fstab-generator[7081]: Ignoring "noauto" option for root device
	[Feb 5 03:21] kauditd_printk_skb: 50 callbacks suppressed
	[Feb 5 03:24] systemd-fstab-generator[7701]: Ignoring "noauto" option for root device
	[Feb 5 03:25] kauditd_printk_skb: 42 callbacks suppressed
	
	
	==> kernel <==
	 03:28:49 up 14 min,  0 users,  load average: 0.24, 0.18, 0.11
	Linux kubernetes-upgrade-024079 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e81a95eed4c06e397c20c04d396a9be6bc84e97be5b6e8c603fdc1df8c4a5a6d] <==
	I0205 03:27:43.345503       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0205 03:27:43.861760       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:43.862358       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0205 03:27:43.872619       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0205 03:27:43.878662       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0205 03:27:43.885867       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0205 03:27:43.885950       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0205 03:27:43.886220       1 instance.go:233] Using reconciler: lease
	W0205 03:27:43.887332       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:44.862882       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:44.862942       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:44.888049       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:46.286953       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:46.330512       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:46.666329       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:48.389386       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:48.847837       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:49.275652       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:52.437981       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:53.185905       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:53.362863       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:58.683074       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:59.273901       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:27:59.378847       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0205 03:28:03.886945       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
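The apiserver's own log ends with every etcd dial to 127.0.0.1:2379 being refused and a fatal "Error creating leases", so the container exits before it can serve 8443. A sketch for checking the etcd static pod from the node; the metrics port 2381 and its /readyz path are taken from the etcd container spec in the kubelet log below, and CONTAINERID is a placeholder:

    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep etcd
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # etcd's readiness endpoint on its metrics listener, if the container is actually running
    curl http://127.0.0.1:2381/readyz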
	
	
	==> kubelet <==
	Feb 05 03:28:38 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:38.918249    7708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-024079?timeout=10s\": dial tcp 192.168.61.227:8443: connect: connection refused" interval="7s"
	Feb 05 03:28:39 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:39.174372    7708 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-024079\" not found" node="kubernetes-upgrade-024079"
	Feb 05 03:28:39 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:39.182474    7708 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-024079_kube-system_349b2f22fd891bd28a8742fbc607d6cb_1\" is already in use by 514b0872afa9d05af09ee5be1d09f899aa3db700df15bba03727395f1d4c7b40. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="f5a764eacd45b952cf6d8f134002d8fccbfff46f38f66ea0c1152ee5eefff775"
	Feb 05 03:28:39 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:39.182614    7708 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.32.1,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.
0/12 --use-service-account-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,Recu
rsiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-ku
bernetes-upgrade-024079_kube-system(349b2f22fd891bd28a8742fbc607d6cb): CreateContainerError: the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-024079_kube-system_349b2f22fd891bd28a8742fbc607d6cb_1\" is already in use by 514b0872afa9d05af09ee5be1d09f899aa3db700df15bba03727395f1d4c7b40. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Feb 05 03:28:39 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:39.183860    7708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-024079_kube-system_349b2f22fd891bd28a8742fbc607d6cb_1\\\" is already in use by 514b0872afa9d05af09ee5be1d09f899aa3db700df15bba03727395f1d4c7b40. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-024079" podUID="349b2f22fd891bd28a8742fbc607d6cb"
	Feb 05 03:28:41 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:41.778800    7708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.61.227:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-024079.182131fbb0220386  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-024079,UID:kubernetes-upgrade-024079,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node kubernetes-upgrade-024079 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-024079,},FirstTimestamp:2025-02-05 03:24:48.203293574 +0000 UTC m=+0.494665881,LastTimestamp:2025-02-05 03:24:48.203293574 +0000 UTC m=+0.494665881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,
ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-024079,}"
	Feb 05 03:28:42 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:42.175367    7708 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-024079\" not found" node="kubernetes-upgrade-024079"
	Feb 05 03:28:42 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:42.175608    7708 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-024079\" not found" node="kubernetes-upgrade-024079"
	Feb 05 03:28:42 kubernetes-upgrade-024079 kubelet[7708]: I0205 03:28:42.176286    7708 scope.go:117] "RemoveContainer" containerID="e81a95eed4c06e397c20c04d396a9be6bc84e97be5b6e8c603fdc1df8c4a5a6d"
	Feb 05 03:28:42 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:42.176644    7708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-024079_kube-system(1854e96712f908395a65b47710f3f7e1)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-024079" podUID="1854e96712f908395a65b47710f3f7e1"
	Feb 05 03:28:42 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:42.187386    7708 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-kubernetes-upgrade-024079_kube-system_cb629e3a3cbe8b3ecb9721b9079748f5_1\" is already in use by cffa39145e2ef8f22f3fdf9f0a738e572af8e6be16ca4535405276e88e0e735b. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="99103524917d54fba4ec0e262617bb820dbc764783aa7c8cb97da0c0805c25aa"
	Feb 05 03:28:42 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:42.187593    7708 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.16-0,Command:[etcd --advertise-client-urls=https://192.168.61.227:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.61.227:2380 --initial-cluster=kubernetes-upgrade-024079=https://192.168.61.227:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.61.227:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.61.227:2380 --name=kubernetes-upgrade-024079 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var
/lib/minikube/certs/etcd/ca.crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:n
il,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-kubernetes-upgrade-024079_kube-system(cb629e3a3cbe8b3ecb9721b907
9748f5): CreateContainerError: the container name \"k8s_etcd_etcd-kubernetes-upgrade-024079_kube-system_cb629e3a3cbe8b3ecb9721b9079748f5_1\" is already in use by cffa39145e2ef8f22f3fdf9f0a738e572af8e6be16ca4535405276e88e0e735b. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Feb 05 03:28:42 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:42.188805    7708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-kubernetes-upgrade-024079_kube-system_cb629e3a3cbe8b3ecb9721b9079748f5_1\\\" is already in use by cffa39145e2ef8f22f3fdf9f0a738e572af8e6be16ca4535405276e88e0e735b. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-kubernetes-upgrade-024079" podUID="cb629e3a3cbe8b3ecb9721b9079748f5"
	Feb 05 03:28:45 kubernetes-upgrade-024079 kubelet[7708]: I0205 03:28:45.909139    7708 kubelet_node_status.go:76] "Attempting to register node" node="kubernetes-upgrade-024079"
	Feb 05 03:28:45 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:45.910115    7708 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.227:8443: connect: connection refused" node="kubernetes-upgrade-024079"
	Feb 05 03:28:45 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:45.920005    7708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-024079?timeout=10s\": dial tcp 192.168.61.227:8443: connect: connection refused" interval="7s"
	Feb 05 03:28:48 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:48.191602    7708 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 05 03:28:48 kubernetes-upgrade-024079 kubelet[7708]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 05 03:28:48 kubernetes-upgrade-024079 kubelet[7708]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 05 03:28:48 kubernetes-upgrade-024079 kubelet[7708]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 05 03:28:48 kubernetes-upgrade-024079 kubelet[7708]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 05 03:28:48 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:48.273321    7708 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726128273034913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 03:28:48 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:48.273365    7708 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726128273034913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 03:28:49 kubernetes-upgrade-024079 kubelet[7708]: W0205 03:28:49.890381    7708 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.227:8443: connect: connection refused
	Feb 05 03:28:49 kubernetes-upgrade-024079 kubelet[7708]: E0205 03:28:49.890516    7708 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.61.227:8443: connect: connection refused" logger="UnhandledError"
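The CreateContainerError entries above show the kubelet looping because CRI-O still holds exited containers with the same k8s_* names, and the error text itself says the stale container must be removed before the name can be reused. One manual way to clear that while debugging (a sketch; the container IDs are the ones quoted in the messages above, and the test itself simply deletes the profile):

    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --name kube-controller-manager
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock rm 514b0872afa9d05af09ee5be1d09f899aa3db700df15bba03727395f1d4c7b40
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock rm cffa39145e2ef8f22f3fdf9f0a738e572af8e6be16ca4535405276e88e0e735b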
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-024079 -n kubernetes-upgrade-024079
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-024079 -n kubernetes-upgrade-024079: exit status 2 (232.963122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-024079" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-024079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-024079
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-024079: (1.129985669s)
--- FAIL: TestKubernetesUpgrade (1182.08s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (61.02s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-922984 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-922984 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.29093987s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-922984] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-922984" primary control-plane node in "pause-922984" cluster
	* Updating the running kvm2 "pause-922984" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-922984" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 03:11:29.146479   60573 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:11:29.146627   60573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:11:29.146640   60573 out.go:358] Setting ErrFile to fd 2...
	I0205 03:11:29.146647   60573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:11:29.146820   60573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:11:29.147345   60573 out.go:352] Setting JSON to false
	I0205 03:11:29.148307   60573 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6840,"bootTime":1738718249,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:11:29.148404   60573 start.go:139] virtualization: kvm guest
	I0205 03:11:29.150558   60573 out.go:177] * [pause-922984] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:11:29.152163   60573 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:11:29.152184   60573 notify.go:220] Checking for updates...
	I0205 03:11:29.154618   60573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:11:29.155972   60573 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:11:29.157252   60573 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:11:29.158447   60573 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:11:29.159647   60573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:11:29.161475   60573 config.go:182] Loaded profile config "pause-922984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:11:29.161862   60573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:11:29.161937   60573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:11:29.177722   60573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0205 03:11:29.178211   60573 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:11:29.178753   60573 main.go:141] libmachine: Using API Version  1
	I0205 03:11:29.178773   60573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:11:29.179177   60573 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:11:29.179406   60573 main.go:141] libmachine: (pause-922984) Calling .DriverName
	I0205 03:11:29.179690   60573 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:11:29.180118   60573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:11:29.180186   60573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:11:29.194803   60573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0205 03:11:29.195193   60573 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:11:29.195727   60573 main.go:141] libmachine: Using API Version  1
	I0205 03:11:29.195748   60573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:11:29.196037   60573 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:11:29.196244   60573 main.go:141] libmachine: (pause-922984) Calling .DriverName
	I0205 03:11:29.234482   60573 out.go:177] * Using the kvm2 driver based on existing profile
	I0205 03:11:29.235837   60573 start.go:297] selected driver: kvm2
	I0205 03:11:29.235855   60573 start.go:901] validating driver "kvm2" against &{Name:pause-922984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-922984 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.73 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-polic
y:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:11:29.236032   60573 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:11:29.236369   60573 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:11:29.236443   60573 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:11:29.251834   60573 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:11:29.252561   60573 cni.go:84] Creating CNI manager for ""
	I0205 03:11:29.252617   60573 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:11:29.252691   60573 start.go:340] cluster config:
	{Name:pause-922984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-922984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.73 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:f
alse storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:11:29.252816   60573 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:11:29.254660   60573 out.go:177] * Starting "pause-922984" primary control-plane node in "pause-922984" cluster
	I0205 03:11:29.256179   60573 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:11:29.256228   60573 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 03:11:29.256239   60573 cache.go:56] Caching tarball of preloaded images
	I0205 03:11:29.256312   60573 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 03:11:29.256323   60573 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 03:11:29.256452   60573 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/pause-922984/config.json ...
	I0205 03:11:29.256653   60573 start.go:360] acquireMachinesLock for pause-922984: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:11:29.256700   60573 start.go:364] duration metric: took 26.698µs to acquireMachinesLock for "pause-922984"
	I0205 03:11:29.256719   60573 start.go:96] Skipping create...Using existing machine configuration
	I0205 03:11:29.256727   60573 fix.go:54] fixHost starting: 
	I0205 03:11:29.256978   60573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:11:29.257014   60573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:11:29.271471   60573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0205 03:11:29.271896   60573 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:11:29.272339   60573 main.go:141] libmachine: Using API Version  1
	I0205 03:11:29.272361   60573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:11:29.272755   60573 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:11:29.272958   60573 main.go:141] libmachine: (pause-922984) Calling .DriverName
	I0205 03:11:29.273084   60573 main.go:141] libmachine: (pause-922984) Calling .GetState
	I0205 03:11:29.274830   60573 fix.go:112] recreateIfNeeded on pause-922984: state=Running err=<nil>
	W0205 03:11:29.274850   60573 fix.go:138] unexpected machine state, will restart: <nil>
	I0205 03:11:29.276910   60573 out.go:177] * Updating the running kvm2 "pause-922984" VM ...
	I0205 03:11:29.278339   60573 machine.go:93] provisionDockerMachine start ...
	I0205 03:11:29.278369   60573 main.go:141] libmachine: (pause-922984) Calling .DriverName
	I0205 03:11:29.278597   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHHostname
	I0205 03:11:29.281105   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.281573   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:29.281633   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.281725   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHPort
	I0205 03:11:29.281873   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:29.281993   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:29.282223   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHUsername
	I0205 03:11:29.282387   60573 main.go:141] libmachine: Using SSH client type: native
	I0205 03:11:29.282586   60573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.73 22 <nil> <nil>}
	I0205 03:11:29.282598   60573 main.go:141] libmachine: About to run SSH command:
	hostname
	I0205 03:11:29.390643   60573 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-922984
	
	I0205 03:11:29.390700   60573 main.go:141] libmachine: (pause-922984) Calling .GetMachineName
	I0205 03:11:29.390943   60573 buildroot.go:166] provisioning hostname "pause-922984"
	I0205 03:11:29.390976   60573 main.go:141] libmachine: (pause-922984) Calling .GetMachineName
	I0205 03:11:29.391164   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHHostname
	I0205 03:11:29.394026   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.394447   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:29.394479   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.394682   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHPort
	I0205 03:11:29.394875   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:29.395048   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:29.395201   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHUsername
	I0205 03:11:29.395344   60573 main.go:141] libmachine: Using SSH client type: native
	I0205 03:11:29.395524   60573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.73 22 <nil> <nil>}
	I0205 03:11:29.395535   60573 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-922984 && echo "pause-922984" | sudo tee /etc/hostname
	I0205 03:11:29.529171   60573 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-922984
	
	I0205 03:11:29.529207   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHHostname
	I0205 03:11:29.532063   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.532365   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:29.532388   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.532623   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHPort
	I0205 03:11:29.532812   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:29.532975   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:29.533077   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHUsername
	I0205 03:11:29.533253   60573 main.go:141] libmachine: Using SSH client type: native
	I0205 03:11:29.533457   60573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.73 22 <nil> <nil>}
	I0205 03:11:29.533489   60573 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-922984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-922984/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-922984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:11:29.638005   60573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
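
The SSH command above is the provisioner's idempotent hostname mapping: leave /etc/hosts alone if the name already appears, otherwise rewrite (or append) the 127.0.1.1 entry. A minimal Go sketch of the same logic, run against a local stand-in file rather than the guest's /etc/hosts (an illustration, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostname mirrors the guarded shell above: do nothing if the name is
// already mapped, otherwise rewrite (or append) the 127.0.1.1 entry.
func ensureHostname(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	present := regexp.MustCompile(`\s` + regexp.QuoteMeta(name) + `$`)
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if present.MatchString(l) {
			return nil // already mapped, leave the file alone
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// /tmp/hosts-example is a stand-in; the real target is /etc/hosts in the guest.
	if err := ensureHostname("/tmp/hosts-example", "pause-922984"); err != nil {
		fmt.Println("update failed:", err)
	}
}
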
	I0205 03:11:29.638040   60573 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:11:29.638076   60573 buildroot.go:174] setting up certificates
	I0205 03:11:29.638091   60573 provision.go:84] configureAuth start
	I0205 03:11:29.638107   60573 main.go:141] libmachine: (pause-922984) Calling .GetMachineName
	I0205 03:11:29.638420   60573 main.go:141] libmachine: (pause-922984) Calling .GetIP
	I0205 03:11:29.641422   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.641882   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:29.641909   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.642154   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHHostname
	I0205 03:11:29.644809   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.645241   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:29.645267   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.645393   60573 provision.go:143] copyHostCerts
	I0205 03:11:29.645455   60573 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:11:29.645467   60573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:11:29.645521   60573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:11:29.645602   60573 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:11:29.645610   60573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:11:29.645629   60573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:11:29.645678   60573 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:11:29.645685   60573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:11:29.645701   60573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:11:29.645743   60573 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.pause-922984 san=[127.0.0.1 192.168.50.73 localhost minikube pause-922984]
	I0205 03:11:29.790754   60573 provision.go:177] copyRemoteCerts
	I0205 03:11:29.790806   60573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:11:29.790828   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHHostname
	I0205 03:11:29.793861   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.794226   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:29.794259   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.794464   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHPort
	I0205 03:11:29.794645   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:29.794845   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHUsername
	I0205 03:11:29.794984   60573 sshutil.go:53] new ssh client: &{IP:192.168.50.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/pause-922984/id_rsa Username:docker}
	I0205 03:11:29.880110   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:11:29.910212   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0205 03:11:29.936627   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0205 03:11:29.967262   60573 provision.go:87] duration metric: took 329.154855ms to configureAuth
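
configureAuth above regenerates the machine server certificate with the SAN set logged at provision.go:117 (127.0.0.1, 192.168.50.73, localhost, minikube, pause-922984) and pushes it to /etc/docker over SSH. A rough crypto/x509 sketch of issuing a certificate with that SAN set; it is self-signed here for brevity, whereas the real code signs with the ca.pem / ca-key.pem pair shown in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-922984"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // 26280h, the CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN set copied from the provision.go:117 line above.
		DNSNames:    []string{"localhost", "minikube", "pause-922984"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.73")},
	}
	// Self-signed for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
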
	I0205 03:11:29.967295   60573 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:11:29.967516   60573 config.go:182] Loaded profile config "pause-922984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:11:29.967592   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHHostname
	I0205 03:11:29.970527   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.970930   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:29.970961   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:29.971215   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHPort
	I0205 03:11:29.971458   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:29.971652   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:29.971810   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHUsername
	I0205 03:11:29.972000   60573 main.go:141] libmachine: Using SSH client type: native
	I0205 03:11:29.972222   60573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.73 22 <nil> <nil>}
	I0205 03:11:29.972245   60573 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:11:35.455959   60573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 03:11:35.455983   60573 machine.go:96] duration metric: took 6.177627238s to provisionDockerMachine
	I0205 03:11:35.455994   60573 start.go:293] postStartSetup for "pause-922984" (driver="kvm2")
	I0205 03:11:35.456003   60573 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:11:35.456023   60573 main.go:141] libmachine: (pause-922984) Calling .DriverName
	I0205 03:11:35.456496   60573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:11:35.456522   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHHostname
	I0205 03:11:35.459567   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:35.460027   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:35.460073   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:35.460292   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHPort
	I0205 03:11:35.460498   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:35.460656   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHUsername
	I0205 03:11:35.460791   60573 sshutil.go:53] new ssh client: &{IP:192.168.50.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/pause-922984/id_rsa Username:docker}
	I0205 03:11:35.614349   60573 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:11:35.643435   60573 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:11:35.643469   60573 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:11:35.643545   60573 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:11:35.643688   60573 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:11:35.643809   60573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:11:35.710464   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:11:35.802434   60573 start.go:296] duration metric: took 346.42533ms for postStartSetup
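
The postStartSetup scan above walks .minikube/files and mirrors each asset into the guest at the same relative path (199892.pem ends up in /etc/ssl/certs). A small sketch of that mapping against the same local root; it only prints source/destination pairs instead of copying:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	// Same root the filesync.go:126 scan uses; printing instead of copying.
	root := "/home/jenkins/minikube-integration/20363-12788/.minikube/files"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, relErr := filepath.Rel(root, path)
		if relErr != nil {
			return relErr
		}
		// Assets keep their relative path inside the guest, e.g.
		// files/etc/ssl/certs/199892.pem -> /etc/ssl/certs/199892.pem.
		fmt.Printf("local asset: %s -> /%s\n", path, rel)
		return nil
	})
	if err != nil {
		fmt.Println("scan failed:", err)
	}
}
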
	I0205 03:11:35.802489   60573 fix.go:56] duration metric: took 6.545760065s for fixHost
	I0205 03:11:35.802517   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHHostname
	I0205 03:11:35.805919   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:35.806456   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:35.806491   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:35.806824   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHPort
	I0205 03:11:35.807056   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:35.807256   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:35.807430   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHUsername
	I0205 03:11:35.807641   60573 main.go:141] libmachine: Using SSH client type: native
	I0205 03:11:35.807880   60573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.73 22 <nil> <nil>}
	I0205 03:11:35.807898   60573 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:11:36.159204   60573 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738725096.118146438
	
	I0205 03:11:36.159265   60573 fix.go:216] guest clock: 1738725096.118146438
	I0205 03:11:36.159275   60573 fix.go:229] Guest: 2025-02-05 03:11:36.118146438 +0000 UTC Remote: 2025-02-05 03:11:35.802494704 +0000 UTC m=+6.696759187 (delta=315.651734ms)
	I0205 03:11:36.159324   60573 fix.go:200] guest clock delta is within tolerance: 315.651734ms
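
The guest clock check reduces to comparing the `date +%s.%N` result against a host-side reference and resyncing only when the delta exceeds a tolerance. A sketch of that arithmetic using the two timestamps from the log; the 2-second tolerance is an assumed value, not necessarily the one minikube applies:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the fix.go lines above: the guest reports
	// 1738725096.118146438 via `date +%s.%N`, the host reference is the
	// Remote time recorded just before the SSH call returned.
	guest := time.Unix(1738725096, 118146438)
	remote := time.Date(2025, 2, 5, 3, 11, 35, 802494704, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, not the value minikube uses
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would be resynced\n", delta)
	}
}
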
	I0205 03:11:36.159336   60573 start.go:83] releasing machines lock for "pause-922984", held for 6.902623587s
	I0205 03:11:36.159368   60573 main.go:141] libmachine: (pause-922984) Calling .DriverName
	I0205 03:11:36.159646   60573 main.go:141] libmachine: (pause-922984) Calling .GetIP
	I0205 03:11:36.162304   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:36.162757   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:36.162787   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:36.162976   60573 main.go:141] libmachine: (pause-922984) Calling .DriverName
	I0205 03:11:36.163479   60573 main.go:141] libmachine: (pause-922984) Calling .DriverName
	I0205 03:11:36.163649   60573 main.go:141] libmachine: (pause-922984) Calling .DriverName
	I0205 03:11:36.163751   60573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:11:36.163801   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHHostname
	I0205 03:11:36.164048   60573 ssh_runner.go:195] Run: cat /version.json
	I0205 03:11:36.164080   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHHostname
	I0205 03:11:36.166830   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:36.167034   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:36.167441   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:36.167487   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:36.167517   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:36.167538   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:36.167833   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHPort
	I0205 03:11:36.167911   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHPort
	I0205 03:11:36.168094   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:36.168115   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHKeyPath
	I0205 03:11:36.168262   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHUsername
	I0205 03:11:36.168266   60573 main.go:141] libmachine: (pause-922984) Calling .GetSSHUsername
	I0205 03:11:36.168416   60573 sshutil.go:53] new ssh client: &{IP:192.168.50.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/pause-922984/id_rsa Username:docker}
	I0205 03:11:36.168422   60573 sshutil.go:53] new ssh client: &{IP:192.168.50.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/pause-922984/id_rsa Username:docker}
	I0205 03:11:36.314943   60573 ssh_runner.go:195] Run: systemctl --version
	I0205 03:11:36.345657   60573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:11:36.629980   60573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:11:36.663751   60573 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:11:36.663824   60573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:11:36.680384   60573 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
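
The find/mv step above would disable any pre-existing bridge or podman CNI configs by renaming them to *.mk_disabled (none were present here). A sketch of the same pass, pointed at a scratch directory instead of /etc/cni/net.d:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/tmp/cni-net.d" // scratch stand-in for /etc/cni/net.d
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			fmt.Println("bad pattern:", err)
			continue
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}
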
	I0205 03:11:36.680412   60573 start.go:495] detecting cgroup driver to use...
	I0205 03:11:36.680511   60573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:11:36.707407   60573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:11:36.726549   60573 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:11:36.726612   60573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:11:36.746176   60573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:11:36.763626   60573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:11:36.994132   60573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:11:37.201964   60573 docker.go:233] disabling docker service ...
	I0205 03:11:37.202054   60573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:11:37.226660   60573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:11:37.242672   60573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:11:37.480665   60573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:11:37.659547   60573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:11:37.690930   60573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:11:37.711309   60573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 03:11:37.711386   60573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:11:37.721948   60573 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:11:37.722040   60573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:11:37.734891   60573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:11:37.746717   60573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:11:37.757369   60573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:11:37.770854   60573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:11:37.783717   60573 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:11:37.796066   60573 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:11:37.810248   60573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:11:37.821412   60573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 03:11:37.834641   60573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:11:38.019496   60573 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:11:48.092406   60573 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.072867251s)
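
The sed pipeline above pins the pause image to registry.k8s.io/pause:3.10 and switches the cgroup manager to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O (the restart alone took ~10s). A sketch of those two substitutions applied to an in-memory sample config; the sample contents are assumed, only the patterns come from the log:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample drop-in contents (assumed); only the substitution patterns below
	// are taken from the sed commands in the log.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
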
	I0205 03:11:48.092445   60573 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:11:48.092503   60573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:11:48.097557   60573 start.go:563] Will wait 60s for crictl version
	I0205 03:11:48.097628   60573 ssh_runner.go:195] Run: which crictl
	I0205 03:11:48.101224   60573 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:11:48.139193   60573 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:11:48.139288   60573 ssh_runner.go:195] Run: crio --version
	I0205 03:11:48.168382   60573 ssh_runner.go:195] Run: crio --version
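
Once CRI-O is back, the code waits for the socket and reads the runtime version via `crictl version` (0.1.0 / cri-o 1.29.1 above). A sketch of that probe as a retry loop shelling out to crictl; the 60-second deadline mirrors the "Will wait 60s for crictl version" line, and the output parsing is deliberately naive:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runtimeVersion keeps retrying `crictl version` until it answers or the
// deadline passes, then pulls the RuntimeVersion field out of the output.
func runtimeVersion() (string, error) {
	deadline := time.Now().Add(60 * time.Second)
	for {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
		if err == nil {
			sc := bufio.NewScanner(bytes.NewReader(out))
			for sc.Scan() {
				if strings.HasPrefix(sc.Text(), "RuntimeVersion:") {
					return strings.TrimSpace(strings.TrimPrefix(sc.Text(), "RuntimeVersion:")), nil
				}
			}
			return "", fmt.Errorf("RuntimeVersion not found in crictl output")
		}
		if time.Now().After(deadline) {
			return "", err
		}
		time.Sleep(time.Second)
	}
}

func main() {
	v, err := runtimeVersion()
	if err != nil {
		fmt.Println("crictl probe failed:", err)
		return
	}
	fmt.Println("container runtime version:", v)
}
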
	I0205 03:11:48.198397   60573 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0205 03:11:48.199647   60573 main.go:141] libmachine: (pause-922984) Calling .GetIP
	I0205 03:11:48.203026   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:48.203451   60573 main.go:141] libmachine: (pause-922984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:29:e2", ip: ""} in network mk-pause-922984: {Iface:virbr4 ExpiryTime:2025-02-05 04:10:44 +0000 UTC Type:0 Mac:52:54:00:50:29:e2 Iaid: IPaddr:192.168.50.73 Prefix:24 Hostname:pause-922984 Clientid:01:52:54:00:50:29:e2}
	I0205 03:11:48.203486   60573 main.go:141] libmachine: (pause-922984) DBG | domain pause-922984 has defined IP address 192.168.50.73 and MAC address 52:54:00:50:29:e2 in network mk-pause-922984
	I0205 03:11:48.203708   60573 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0205 03:11:48.208502   60573 kubeadm.go:883] updating cluster {Name:pause-922984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-922984 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.73 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portain
er:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:11:48.208676   60573 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:11:48.208759   60573 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:11:48.257101   60573 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:11:48.257129   60573 crio.go:433] Images already preloaded, skipping extraction
	I0205 03:11:48.257186   60573 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:11:48.300765   60573 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:11:48.300787   60573 cache_images.go:84] Images are preloaded, skipping loading
	I0205 03:11:48.300795   60573 kubeadm.go:934] updating node { 192.168.50.73 8443 v1.32.1 crio true true} ...
	I0205 03:11:48.300898   60573 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-922984 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-922984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0205 03:11:48.300983   60573 ssh_runner.go:195] Run: crio config
	I0205 03:11:48.346709   60573 cni.go:84] Creating CNI manager for ""
	I0205 03:11:48.346743   60573 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:11:48.346754   60573 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:11:48.346783   60573 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.73 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-922984 NodeName:pause-922984 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 03:11:48.346960   60573 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-922984"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.73"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.73"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 03:11:48.347027   60573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 03:11:48.357096   60573 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:11:48.357175   60573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:11:48.366865   60573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0205 03:11:48.383659   60573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:11:48.402631   60573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
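
The kubeadm config shown above is rendered from the parameters logged at kubeadm.go:189 and copied to /var/tmp/minikube/kubeadm.yaml.new. A heavily trimmed text/template sketch of that rendering; the field set is illustrative and far smaller than minikube's real template:

package main

import (
	"os"
	"text/template"
)

// A heavily trimmed version of the config above; the real output also carries
// the extraArgs lists, KubeletConfiguration and KubeProxyConfiguration documents.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type params struct {
	NodeIP, NodeName, CRISocket               string
	KubernetesVersion, PodSubnet, ServiceCIDR string
	APIServerPort                             int
}

func main() {
	p := params{
		NodeIP:            "192.168.50.73",
		NodeName:          "pause-922984",
		CRISocket:         "/var/run/crio/crio.sock",
		KubernetesVersion: "v1.32.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		APIServerPort:     8443,
	}
	if err := template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
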
	I0205 03:11:48.421807   60573 ssh_runner.go:195] Run: grep 192.168.50.73	control-plane.minikube.internal$ /etc/hosts
	I0205 03:11:48.425718   60573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:11:48.559595   60573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:11:48.575100   60573 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/pause-922984 for IP: 192.168.50.73
	I0205 03:11:48.575126   60573 certs.go:194] generating shared ca certs ...
	I0205 03:11:48.575150   60573 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:11:48.575322   60573 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:11:48.575366   60573 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:11:48.575376   60573 certs.go:256] generating profile certs ...
	I0205 03:11:48.575503   60573 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/pause-922984/client.key
	I0205 03:11:48.575582   60573 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/pause-922984/apiserver.key.a11aece0
	I0205 03:11:48.575620   60573 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/pause-922984/proxy-client.key
	I0205 03:11:48.575722   60573 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:11:48.575754   60573 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:11:48.575764   60573 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:11:48.575786   60573 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:11:48.575811   60573 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:11:48.575834   60573 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:11:48.575870   60573 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:11:48.576521   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:11:48.600646   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:11:48.624770   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:11:48.649995   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:11:48.674268   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/pause-922984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0205 03:11:48.736409   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/pause-922984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0205 03:11:48.803366   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/pause-922984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:11:48.919880   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/pause-922984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0205 03:11:49.124671   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:11:49.316871   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:11:49.465842   60573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:11:49.537431   60573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:11:49.563703   60573 ssh_runner.go:195] Run: openssl version
	I0205 03:11:49.586707   60573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:11:49.625580   60573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:11:49.642687   60573 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:11:49.642763   60573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:11:49.681002   60573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 03:11:49.729862   60573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19989.pem && ln -fs /usr/share/ca-certificates/19989.pem /etc/ssl/certs/19989.pem"
	I0205 03:11:49.774900   60573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19989.pem
	I0205 03:11:49.781956   60573 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:11 /usr/share/ca-certificates/19989.pem
	I0205 03:11:49.782023   60573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19989.pem
	I0205 03:11:49.790465   60573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19989.pem /etc/ssl/certs/51391683.0"
	I0205 03:11:49.811271   60573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:11:49.825821   60573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:11:49.833471   60573 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:11:49.833524   60573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:11:49.850756   60573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:11:49.869242   60573 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 03:11:49.876157   60573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0205 03:11:49.887475   60573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0205 03:11:49.898178   60573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0205 03:11:49.907420   60573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0205 03:11:49.914382   60573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0205 03:11:49.922198   60573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
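
The series of `openssl x509 ... -checkend 86400` runs above asks whether each control-plane certificate expires within the next 24 hours. The same test can be expressed with crypto/x509; the path below is one of the ones checked in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// which is what `openssl x509 -checkend 86400` asks with d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
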
	I0205 03:11:49.929742   60573 kubeadm.go:392] StartCluster: {Name:pause-922984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-922984 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.73 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:
false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:11:49.929902   60573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 03:11:49.929956   60573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:11:50.003511   60573 cri.go:89] found id: "755358df556dce6f2e48e92d581a1094200b24c9bdc77d425911145cd466d35b"
	I0205 03:11:50.003540   60573 cri.go:89] found id: "ceebda47a58d2ce8cf7d15ecb18287cddd291719c8c903043598bc9d961b5d9d"
	I0205 03:11:50.003546   60573 cri.go:89] found id: "5f2bc549c414cff2ca0c6a6112bdd1714da74d43a9810061be4cb7dfa1177c9b"
	I0205 03:11:50.003551   60573 cri.go:89] found id: "58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd"
	I0205 03:11:50.003555   60573 cri.go:89] found id: "394185c489c99169a014f18ddb655f9d0e65f36644a3e0db6cd019393a37bb93"
	I0205 03:11:50.003559   60573 cri.go:89] found id: "0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e"
	I0205 03:11:50.003563   60573 cri.go:89] found id: "e848ba75c83d7caeeb32d6ee02b1041e93ad471afa3d2b2ba1720a77a8e3b9b0"
	I0205 03:11:50.003567   60573 cri.go:89] found id: "cf355ed731f75976d3018e2d31bfea15e6123fd9cf6c16c95547f2cf542b7758"
	I0205 03:11:50.003571   60573 cri.go:89] found id: "ca3b8dd24009597fef2f0d3510a7f2a5792a3fc95bd236d304ca853878722e1a"
	I0205 03:11:50.003580   60573 cri.go:89] found id: "0302ae8bf2d583b0b6ead48b2893f130b22929e26cb021d977cd564789d380c4"
	I0205 03:11:50.003584   60573 cri.go:89] found id: "f5fc983da1f6f6cbacd2b44871e8f6ef06ad1f3c3b8f81cbd690d209359cf2f5"
	I0205 03:11:50.003588   60573 cri.go:89] found id: ""
	I0205 03:11:50.003643   60573 ssh_runner.go:195] Run: sudo runc list -f json
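
StartCluster opens by listing the existing kube-system containers with the label filter shown at cri.go:54, which is where this log excerpt is cut off. A sketch of that listing, shelling out to crictl with the same flags; it assumes crictl on the node and root privileges:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter as the cri.go:54 listing above; needs crictl and root.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(" ", id)
	}
}
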

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-922984 -n pause-922984
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-922984 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-922984 logs -n 25: (1.230145462s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-290619                | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:07 UTC | 05 Feb 25 03:07 UTC |
	| start   | -p NoKubernetes-290619                | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:07 UTC | 05 Feb 25 03:08 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-409141           | force-systemd-env-409141  | jenkins | v1.35.0 | 05 Feb 25 03:07 UTC | 05 Feb 25 03:07 UTC |
	| start   | -p cert-expiration-908105             | cert-expiration-908105    | jenkins | v1.35.0 | 05 Feb 25 03:07 UTC | 05 Feb 25 03:09 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-467430 ssh cat     | force-systemd-flag-467430 | jenkins | v1.35.0 | 05 Feb 25 03:08 UTC | 05 Feb 25 03:08 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-467430          | force-systemd-flag-467430 | jenkins | v1.35.0 | 05 Feb 25 03:08 UTC | 05 Feb 25 03:08 UTC |
	| start   | -p cert-options-653669                | cert-options-653669       | jenkins | v1.35.0 | 05 Feb 25 03:08 UTC | 05 Feb 25 03:09 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-290619 sudo           | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:08 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-290619                | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	| start   | -p NoKubernetes-290619                | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-292727             | running-upgrade-292727    | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	| start   | -p kubernetes-upgrade-024079          | kubernetes-upgrade-024079 | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-653669 ssh               | cert-options-653669       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-653669 -- sudo        | cert-options-653669       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-653669                | cert-options-653669       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	| start   | -p stopped-upgrade-687224             | minikube                  | jenkins | v1.26.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:10 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-290619 sudo           | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-290619                | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	| start   | -p pause-922984 --memory=2048         | pause-922984              | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:11 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-687224 stop           | minikube                  | jenkins | v1.26.0 | 05 Feb 25 03:10 UTC | 05 Feb 25 03:10 UTC |
	| start   | -p stopped-upgrade-687224             | stopped-upgrade-687224    | jenkins | v1.35.0 | 05 Feb 25 03:10 UTC | 05 Feb 25 03:11 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-922984                       | pause-922984              | jenkins | v1.35.0 | 05 Feb 25 03:11 UTC | 05 Feb 25 03:12 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-687224             | stopped-upgrade-687224    | jenkins | v1.35.0 | 05 Feb 25 03:11 UTC | 05 Feb 25 03:11 UTC |
	| start   | -p old-k8s-version-191773             | old-k8s-version-191773    | jenkins | v1.35.0 | 05 Feb 25 03:11 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p cert-expiration-908105             | cert-expiration-908105    | jenkins | v1.35.0 | 05 Feb 25 03:12 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 03:12:15
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 03:12:15.642511   61086 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:12:15.642648   61086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:12:15.642651   61086 out.go:358] Setting ErrFile to fd 2...
	I0205 03:12:15.642655   61086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:12:15.642843   61086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:12:15.643452   61086 out.go:352] Setting JSON to false
	I0205 03:12:15.644562   61086 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6887,"bootTime":1738718249,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:12:15.644618   61086 start.go:139] virtualization: kvm guest
	I0205 03:12:15.647171   61086 out.go:177] * [cert-expiration-908105] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:12:15.648499   61086 notify.go:220] Checking for updates...
	I0205 03:12:15.648506   61086 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:12:15.649713   61086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:12:15.651047   61086 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:12:15.652372   61086 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:12:15.653543   61086 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:12:15.654791   61086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:12:14.044494   60782 out.go:235]   - Generating certificates and keys ...
	I0205 03:12:14.044638   60782 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:12:14.044747   60782 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:12:14.044894   60782 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 03:12:14.162900   60782 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 03:12:14.401545   60782 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 03:12:14.816155   60782 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 03:12:15.071858   60782 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 03:12:15.072071   60782 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-191773] and IPs [192.168.39.74 127.0.0.1 ::1]
	I0205 03:12:15.223663   60782 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 03:12:15.224178   60782 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-191773] and IPs [192.168.39.74 127.0.0.1 ::1]
	I0205 03:12:15.611954   60782 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 03:12:15.656305   61086 config.go:182] Loaded profile config "cert-expiration-908105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:12:15.656691   61086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:12:15.656732   61086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:12:15.672278   61086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I0205 03:12:15.672741   61086 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:12:15.673311   61086 main.go:141] libmachine: Using API Version  1
	I0205 03:12:15.673331   61086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:12:15.673690   61086 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:12:15.673871   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:15.674078   61086 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:12:15.674366   61086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:12:15.674409   61086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:12:15.689228   61086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I0205 03:12:15.689674   61086 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:12:15.690164   61086 main.go:141] libmachine: Using API Version  1
	I0205 03:12:15.690176   61086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:12:15.690504   61086 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:12:15.690662   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:15.729325   61086 out.go:177] * Using the kvm2 driver based on existing profile
	I0205 03:12:15.730629   61086 start.go:297] selected driver: kvm2
	I0205 03:12:15.730639   61086 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-908105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-908105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.120 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:12:15.730770   61086 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:12:15.731714   61086 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:12:15.731788   61086 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:12:15.747773   61086 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:12:15.748245   61086 cni.go:84] Creating CNI manager for ""
	I0205 03:12:15.748287   61086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:12:15.748336   61086 start.go:340] cluster config:
	{Name:cert-expiration-908105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-908105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.120 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:12:15.748441   61086 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:12:15.750663   61086 out.go:177] * Starting "cert-expiration-908105" primary control-plane node in "cert-expiration-908105" cluster
	I0205 03:12:15.751836   61086 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:12:15.751870   61086 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 03:12:15.751876   61086 cache.go:56] Caching tarball of preloaded images
	I0205 03:12:15.751941   61086 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 03:12:15.751947   61086 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 03:12:15.752023   61086 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/config.json ...
	I0205 03:12:15.752228   61086 start.go:360] acquireMachinesLock for cert-expiration-908105: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:12:15.752268   61086 start.go:364] duration metric: took 28.182µs to acquireMachinesLock for "cert-expiration-908105"
	I0205 03:12:15.752278   61086 start.go:96] Skipping create...Using existing machine configuration
	I0205 03:12:15.752281   61086 fix.go:54] fixHost starting: 
	I0205 03:12:15.752548   61086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:12:15.752577   61086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:12:15.767886   61086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36873
	I0205 03:12:15.768247   61086 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:12:15.768673   61086 main.go:141] libmachine: Using API Version  1
	I0205 03:12:15.768689   61086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:12:15.768981   61086 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:12:15.769175   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:15.769314   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetState
	I0205 03:12:15.770827   61086 fix.go:112] recreateIfNeeded on cert-expiration-908105: state=Running err=<nil>
	W0205 03:12:15.770838   61086 fix.go:138] unexpected machine state, will restart: <nil>
	I0205 03:12:15.772753   61086 out.go:177] * Updating the running kvm2 "cert-expiration-908105" VM ...
	I0205 03:12:15.730142   60782 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 03:12:15.805509   60782 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 03:12:15.805952   60782 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:12:15.893294   60782 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:12:15.998614   60782 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:12:16.213085   60782 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:12:16.406193   60782 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:12:16.426712   60782 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:12:16.428908   60782 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:12:16.429005   60782 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:12:16.595620   60782 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:12:15.284936   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:12:15.285232   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:12:15.285256   58877 kubeadm.go:310] 
	I0205 03:12:15.285312   58877 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0205 03:12:15.285414   58877 kubeadm.go:310] 		timed out waiting for the condition
	I0205 03:12:15.285448   58877 kubeadm.go:310] 
	I0205 03:12:15.285506   58877 kubeadm.go:310] 	This error is likely caused by:
	I0205 03:12:15.285551   58877 kubeadm.go:310] 		- The kubelet is not running
	I0205 03:12:15.285699   58877 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0205 03:12:15.285733   58877 kubeadm.go:310] 
	I0205 03:12:15.285873   58877 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0205 03:12:15.285922   58877 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0205 03:12:15.285965   58877 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0205 03:12:15.285974   58877 kubeadm.go:310] 
	I0205 03:12:15.286094   58877 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0205 03:12:15.286194   58877 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0205 03:12:15.286206   58877 kubeadm.go:310] 
	I0205 03:12:15.286334   58877 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0205 03:12:15.286453   58877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0205 03:12:15.286551   58877 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0205 03:12:15.286639   58877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0205 03:12:15.286651   58877 kubeadm.go:310] 
	I0205 03:12:15.286812   58877 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:12:15.286920   58877 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0205 03:12:15.287004   58877 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0205 03:12:15.287144   58877 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-024079 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-024079 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
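	The kubelet diagnostics listed in the failure output above can be collected in one pass before the retry; a minimal sketch using only the commands kubeadm itself suggests (assumes a root shell or sudo; CONTAINERID is a placeholder taken from the crictl listing):
	
		# assumption: run as root; CONTAINERID comes from the `crictl ... ps -a` output below
		systemctl status kubelet
		journalctl -xeu kubelet
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
		systemctl enable kubelet.service   # clears the [WARNING Service-Kubelet] preflight warning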
	
	I0205 03:12:15.287193   58877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0205 03:12:16.366363   58877 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.079134587s)
	I0205 03:12:16.366450   58877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:12:16.380456   58877 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:12:16.390034   58877 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:12:16.390060   58877 kubeadm.go:157] found existing configuration files:
	
	I0205 03:12:16.390106   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:12:16.398997   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:12:16.399054   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:12:16.409431   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:12:16.420730   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:12:16.420798   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:12:16.432957   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:12:16.441938   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:12:16.442017   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:12:16.452609   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:12:16.463862   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:12:16.463948   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
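	The four grep/rm pairs above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443 and is otherwise removed before kubeadm init is retried. A minimal shell sketch of the equivalent loop, assuming the same four paths shown in the log:
	
		# hypothetical consolidation of the per-file checks logged above
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
		    sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere: drop it
		  fi
		done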
	I0205 03:12:16.476971   58877 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:12:16.555608   58877 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0205 03:12:16.555761   58877 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:12:16.735232   58877 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:12:16.735377   58877 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:12:16.735482   58877 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0205 03:12:16.930863   58877 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:12:16.932632   58877 out.go:235]   - Generating certificates and keys ...
	I0205 03:12:16.932734   58877 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:12:16.932818   58877 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:12:16.932950   58877 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0205 03:12:16.933040   58877 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0205 03:12:16.933137   58877 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0205 03:12:16.933235   58877 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0205 03:12:16.933372   58877 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0205 03:12:16.933474   58877 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0205 03:12:16.933589   58877 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0205 03:12:16.933705   58877 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0205 03:12:16.933767   58877 kubeadm.go:310] [certs] Using the existing "sa" key
	I0205 03:12:16.933848   58877 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:12:16.992914   58877 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:12:17.050399   58877 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:12:17.166345   58877 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:12:17.371653   58877 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:12:17.385934   58877 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:12:17.387054   58877 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:12:17.387133   58877 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:12:17.524773   58877 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:12:15.040615   60573 pod_ready.go:103] pod "etcd-pause-922984" in "kube-system" namespace has status "Ready":"False"
	I0205 03:12:17.018319   60573 pod_ready.go:93] pod "etcd-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:17.018349   60573 pod_ready.go:82] duration metric: took 4.009957995s for pod "etcd-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:17.018364   60573 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:19.023649   60573 pod_ready.go:103] pod "kube-apiserver-pause-922984" in "kube-system" namespace has status "Ready":"False"
	I0205 03:12:17.526688   58877 out.go:235]   - Booting up control plane ...
	I0205 03:12:17.526814   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:12:17.529207   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:12:17.531542   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:12:17.532560   58877 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:12:17.538131   58877 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0205 03:12:15.773860   61086 machine.go:93] provisionDockerMachine start ...
	I0205 03:12:15.773873   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:15.774044   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:15.776762   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:15.777174   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:15.777205   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:15.777362   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:15.777527   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:15.777681   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:15.777799   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:15.777975   61086 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:15.778139   61086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.120 22 <nil> <nil>}
	I0205 03:12:15.778144   61086 main.go:141] libmachine: About to run SSH command:
	hostname
	I0205 03:12:15.882368   61086 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-908105
	
	I0205 03:12:15.882387   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetMachineName
	I0205 03:12:15.882628   61086 buildroot.go:166] provisioning hostname "cert-expiration-908105"
	I0205 03:12:15.882647   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetMachineName
	I0205 03:12:15.882789   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:15.885728   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:15.886074   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:15.886096   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:15.886261   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:15.886438   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:15.886637   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:15.886773   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:15.886938   61086 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:15.887089   61086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.120 22 <nil> <nil>}
	I0205 03:12:15.887095   61086 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-908105 && echo "cert-expiration-908105" | sudo tee /etc/hostname
	I0205 03:12:16.004802   61086 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-908105
	
	I0205 03:12:16.004821   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:16.007856   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.008268   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:16.008312   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.008498   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:16.008708   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:16.008903   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:16.009072   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:16.009267   61086 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:16.009501   61086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.120 22 <nil> <nil>}
	I0205 03:12:16.009520   61086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-908105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-908105/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-908105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:12:16.118224   61086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:12:16.118243   61086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:12:16.118291   61086 buildroot.go:174] setting up certificates
	I0205 03:12:16.118300   61086 provision.go:84] configureAuth start
	I0205 03:12:16.118311   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetMachineName
	I0205 03:12:16.118588   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetIP
	I0205 03:12:16.121597   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.121946   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:16.121982   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.122118   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:16.124413   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.124791   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:16.124813   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.124942   61086 provision.go:143] copyHostCerts
	I0205 03:12:16.125000   61086 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:12:16.125008   61086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:12:16.125060   61086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:12:16.125135   61086 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:12:16.125138   61086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:12:16.125160   61086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:12:16.125206   61086 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:12:16.125209   61086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:12:16.125225   61086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:12:16.125266   61086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-908105 san=[127.0.0.1 192.168.72.120 cert-expiration-908105 localhost minikube]
	I0205 03:12:16.283216   61086 provision.go:177] copyRemoteCerts
	I0205 03:12:16.283259   61086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:12:16.283278   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:16.286164   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.286504   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:16.286527   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.286678   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:16.286868   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:16.286998   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:16.287109   61086 sshutil.go:53] new ssh client: &{IP:192.168.72.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/cert-expiration-908105/id_rsa Username:docker}
	I0205 03:12:16.370983   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0205 03:12:16.398537   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0205 03:12:16.426606   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:12:16.456607   61086 provision.go:87] duration metric: took 338.295013ms to configureAuth
	I0205 03:12:16.456626   61086 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:12:16.456834   61086 config.go:182] Loaded profile config "cert-expiration-908105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:12:16.456923   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:16.460461   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.460895   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:16.460921   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.461144   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:16.461360   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:16.461515   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:16.461616   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:16.461733   61086 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:16.461901   61086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.120 22 <nil> <nil>}
	I0205 03:12:16.461909   61086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:12:16.597537   60782 out.go:235]   - Booting up control plane ...
	I0205 03:12:16.597694   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:12:16.605308   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:12:16.606721   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:12:16.609285   60782 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:12:16.613978   60782 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0205 03:12:21.026158   60573 pod_ready.go:103] pod "kube-apiserver-pause-922984" in "kube-system" namespace has status "Ready":"False"
	I0205 03:12:21.524745   60573 pod_ready.go:93] pod "kube-apiserver-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:21.524773   60573 pod_ready.go:82] duration metric: took 4.506399667s for pod "kube-apiserver-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:21.524788   60573 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:21.529193   60573 pod_ready.go:93] pod "kube-controller-manager-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:21.529218   60573 pod_ready.go:82] duration metric: took 4.422836ms for pod "kube-controller-manager-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:21.529227   60573 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dwrtm" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:21.532919   60573 pod_ready.go:93] pod "kube-proxy-dwrtm" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:21.532936   60573 pod_ready.go:82] duration metric: took 3.703892ms for pod "kube-proxy-dwrtm" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:21.532944   60573 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.542321   60573 pod_ready.go:93] pod "kube-scheduler-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:23.542345   60573 pod_ready.go:82] duration metric: took 2.009394124s for pod "kube-scheduler-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.542358   60573 pod_ready.go:39] duration metric: took 12.559300813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
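
The pod_ready.go entries above show minikube polling each system-critical pod until its Ready condition reports True, with a per-pod timeout. A minimal sketch of that kind of readiness check using client-go follows; it is illustrative only (the kubeconfig path, polling interval, and timeout are assumptions, not minikube's actual values):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // assumed polling interval
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Assumed kubeconfig path for the sketch; the log uses the Jenkins profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-pause-922984", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}

Minikube's own loop additionally aggregates results across the label selectors listed in the log line above; this sketch checks a single named pod.
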
	I0205 03:12:23.542375   60573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 03:12:23.554545   60573 ops.go:34] apiserver oom_adj: -16
	I0205 03:12:23.554568   60573 kubeadm.go:597] duration metric: took 33.441182227s to restartPrimaryControlPlane
	I0205 03:12:23.554577   60573 kubeadm.go:394] duration metric: took 33.624847083s to StartCluster
	I0205 03:12:23.554594   60573 settings.go:142] acquiring lock: {Name:mk2eca847da5ba78f5b041a83e5cfcbdebb0c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:23.554669   60573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:12:23.555668   60573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:23.555912   60573 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.73 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:12:23.556048   60573 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0205 03:12:23.556180   60573 config.go:182] Loaded profile config "pause-922984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:12:23.557404   60573 out.go:177] * Verifying Kubernetes components...
	I0205 03:12:23.557406   60573 out.go:177] * Enabled addons: 
	I0205 03:12:22.066812   61086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 03:12:22.066829   61086 machine.go:96] duration metric: took 6.292961729s to provisionDockerMachine
	I0205 03:12:22.066842   61086 start.go:293] postStartSetup for "cert-expiration-908105" (driver="kvm2")
	I0205 03:12:22.066853   61086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:12:22.066869   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:22.067295   61086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:12:22.067322   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:22.070707   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.071190   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:22.071222   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.071425   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:22.071613   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:22.071801   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:22.071933   61086 sshutil.go:53] new ssh client: &{IP:192.168.72.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/cert-expiration-908105/id_rsa Username:docker}
	I0205 03:12:22.151890   61086 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:12:22.156159   61086 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:12:22.156177   61086 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:12:22.156239   61086 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:12:22.156301   61086 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:12:22.156376   61086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:12:22.166632   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:12:22.190167   61086 start.go:296] duration metric: took 123.314106ms for postStartSetup
	I0205 03:12:22.190215   61086 fix.go:56] duration metric: took 6.437916565s for fixHost
	I0205 03:12:22.190231   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:22.192818   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.193137   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:22.193150   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.193321   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:22.193553   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:22.193686   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:22.193787   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:22.193874   61086 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:22.194031   61086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.120 22 <nil> <nil>}
	I0205 03:12:22.194035   61086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:12:22.293942   61086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738725142.287651267
	
	I0205 03:12:22.293956   61086 fix.go:216] guest clock: 1738725142.287651267
	I0205 03:12:22.293964   61086 fix.go:229] Guest: 2025-02-05 03:12:22.287651267 +0000 UTC Remote: 2025-02-05 03:12:22.190217298 +0000 UTC m=+6.590301571 (delta=97.433969ms)
	I0205 03:12:22.293990   61086 fix.go:200] guest clock delta is within tolerance: 97.433969ms
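
The fix.go lines above compare the guest clock, read over SSH with `date +%s.%N`, against the host clock and proceed only when the delta stays within a tolerance. A rough sketch of that comparison; the tolerance value here is assumed for illustration, since the actual threshold is not shown in the log:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the output of `date +%s.%N` run on the guest and returns
// the absolute difference between the guest clock and the local (host) clock.
func clockDelta(guestOutput string, now time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Duration(math.Abs(float64(now.Sub(guest)))), nil
}

func main() {
	const tolerance = 1 * time.Second // assumed tolerance, for illustration only
	delta, err := clockDelta("1738725142.287651267", time.Now())
	if err != nil {
		panic(err)
	}
	if delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		return
	}
	fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
}

Run against the literal timestamp from the log this will report a large delta, since it compares against the current wall clock; the point is only the parsing and the tolerance check.
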
	I0205 03:12:22.293995   61086 start.go:83] releasing machines lock for "cert-expiration-908105", held for 6.541722078s
	I0205 03:12:22.294019   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:22.294319   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetIP
	I0205 03:12:22.296797   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.297093   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:22.297116   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.297195   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:22.297742   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:22.297920   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:22.298014   61086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:12:22.298054   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:22.298112   61086 ssh_runner.go:195] Run: cat /version.json
	I0205 03:12:22.298128   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:22.300589   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.300925   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:22.300989   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.301006   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.301083   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:22.301233   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:22.301398   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:22.301435   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:22.301449   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.301515   61086 sshutil.go:53] new ssh client: &{IP:192.168.72.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/cert-expiration-908105/id_rsa Username:docker}
	I0205 03:12:22.301630   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:22.301755   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:22.301879   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:22.301984   61086 sshutil.go:53] new ssh client: &{IP:192.168.72.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/cert-expiration-908105/id_rsa Username:docker}
	I0205 03:12:22.394716   61086 ssh_runner.go:195] Run: systemctl --version
	I0205 03:12:22.400355   61086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:12:22.557683   61086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:12:22.563537   61086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:12:22.563603   61086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:12:22.572849   61086 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0205 03:12:22.572861   61086 start.go:495] detecting cgroup driver to use...
	I0205 03:12:22.572928   61086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:12:22.589051   61086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:12:22.602859   61086 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:12:22.602905   61086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:12:22.616940   61086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:12:22.630908   61086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:12:22.767095   61086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:12:22.896881   61086 docker.go:233] disabling docker service ...
	I0205 03:12:22.896933   61086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:12:22.914125   61086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:12:22.928198   61086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:12:23.071383   61086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:12:23.216997   61086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:12:23.230933   61086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:12:23.249243   61086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 03:12:23.249289   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.259366   61086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:12:23.259433   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.271169   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.282778   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.293270   61086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:12:23.303778   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.313936   61086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.324281   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.334099   61086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:12:23.343253   61086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 03:12:23.352797   61086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:12:23.504162   61086 ssh_runner.go:195] Run: sudo systemctl restart crio
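
The run of sed commands above (crio.go:59 and crio.go:70) rewrites /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image and switching CRI-O to the cgroupfs cgroup manager before crio is restarted. Below is a small sketch of the cgroup_manager rewrite expressed in Go rather than sed; the sample drop-in content is invented for illustration:

package main

import (
	"fmt"
	"regexp"
)

// setCgroupManager mirrors the sed command in the log: it rewrites the
// cgroup_manager line of a CRI-O drop-in config to use cgroupfs.
func setCgroupManager(conf string) string {
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	return re.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
}

func main() {
	// Example drop-in content; on the node the real file is
	// /etc/crio/crio.conf.d/02-crio.conf.
	conf := "[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(setCgroupManager(conf))
}

The same pattern covers the other edits in the log (pause_image, conmon_cgroup, default_sysctls): match the whole line and substitute the desired key = value pair, then restart crio so the change takes effect.
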
	I0205 03:12:23.732871   61086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:12:23.732932   61086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:12:23.737776   61086 start.go:563] Will wait 60s for crictl version
	I0205 03:12:23.737825   61086 ssh_runner.go:195] Run: which crictl
	I0205 03:12:23.743163   61086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:12:23.794191   61086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:12:23.794276   61086 ssh_runner.go:195] Run: crio --version
	I0205 03:12:23.821765   61086 ssh_runner.go:195] Run: crio --version
	I0205 03:12:23.850532   61086 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0205 03:12:23.559113   60573 addons.go:514] duration metric: took 3.071416ms for enable addons: enabled=[]
	I0205 03:12:23.559150   60573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:12:23.747384   60573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:12:23.765521   60573 node_ready.go:35] waiting up to 6m0s for node "pause-922984" to be "Ready" ...
	I0205 03:12:23.768751   60573 node_ready.go:49] node "pause-922984" has status "Ready":"True"
	I0205 03:12:23.768772   60573 node_ready.go:38] duration metric: took 3.214431ms for node "pause-922984" to be "Ready" ...
	I0205 03:12:23.768781   60573 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:12:23.771858   60573 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-wtrdd" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.776541   60573 pod_ready.go:93] pod "coredns-668d6bf9bc-wtrdd" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:23.776564   60573 pod_ready.go:82] duration metric: took 4.678344ms for pod "coredns-668d6bf9bc-wtrdd" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.776577   60573 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.922639   60573 pod_ready.go:93] pod "etcd-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:23.922675   60573 pod_ready.go:82] duration metric: took 146.090176ms for pod "etcd-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.922690   60573 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.851848   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetIP
	I0205 03:12:23.854665   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:23.854983   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:23.855020   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:23.855258   61086 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0205 03:12:23.859585   61086 kubeadm.go:883] updating cluster {Name:cert-expiration-908105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-908105 Namespac
e:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.120 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:12:23.859678   61086 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:12:23.859717   61086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:12:23.907016   61086 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:12:23.907025   61086 crio.go:433] Images already preloaded, skipping extraction
	I0205 03:12:23.907069   61086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:12:23.940204   61086 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:12:23.940215   61086 cache_images.go:84] Images are preloaded, skipping loading
	I0205 03:12:23.940220   61086 kubeadm.go:934] updating node { 192.168.72.120 8443 v1.32.1 crio true true} ...
	I0205 03:12:23.940342   61086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-908105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-908105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0205 03:12:23.940402   61086 ssh_runner.go:195] Run: crio config
	I0205 03:12:23.992494   61086 cni.go:84] Creating CNI manager for ""
	I0205 03:12:23.992510   61086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:12:23.992516   61086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:12:23.992533   61086 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.120 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-908105 NodeName:cert-expiration-908105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 03:12:23.992639   61086 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-908105"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 03:12:23.992690   61086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 03:12:24.002851   61086 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:12:24.002904   61086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:12:24.012431   61086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0205 03:12:24.029395   61086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:12:24.045896   61086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0205 03:12:24.062039   61086 ssh_runner.go:195] Run: grep 192.168.72.120	control-plane.minikube.internal$ /etc/hosts
	I0205 03:12:24.065885   61086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:12:24.203022   61086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:12:24.245195   61086 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105 for IP: 192.168.72.120
	I0205 03:12:24.245221   61086 certs.go:194] generating shared ca certs ...
	I0205 03:12:24.245240   61086 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:24.245457   61086 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:12:24.245528   61086 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:12:24.245538   61086 certs.go:256] generating profile certs ...
	W0205 03:12:24.245703   61086 out.go:270] ! Certificate client.crt has expired. Generating a new one...
	I0205 03:12:24.245729   61086 certs.go:624] cert expired /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.crt: expiration: 2025-02-05 03:12:01 +0000 UTC, now: 2025-02-05 03:12:24.245722276 +0000 UTC m=+8.645806550
	I0205 03:12:24.245857   61086 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.key
	I0205 03:12:24.245891   61086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.crt with IP's: []
	I0205 03:12:24.476576   61086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.crt ...
	I0205 03:12:24.476593   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.crt: {Name:mk250fda3344694083c629713ce2867a46dc930f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:24.476734   61086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.key ...
	I0205 03:12:24.476741   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.key: {Name:mkc2aeee635b17e08281429f1208655e82615088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0205 03:12:24.476887   61086 out.go:270] ! Certificate apiserver.crt.019bc120 has expired. Generating a new one...
	I0205 03:12:24.476905   61086 certs.go:624] cert expired /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt.019bc120: expiration: 2025-02-05 03:12:01 +0000 UTC, now: 2025-02-05 03:12:24.476900105 +0000 UTC m=+8.876984378
	I0205 03:12:24.476971   61086 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key.019bc120
	I0205 03:12:24.476982   61086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt.019bc120 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.120]
	I0205 03:12:24.720526   61086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt.019bc120 ...
	I0205 03:12:24.720545   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt.019bc120: {Name:mke2cc689fbdbb37bc1c811ef4041565b8fc4f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:24.720699   61086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key.019bc120 ...
	I0205 03:12:24.720709   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key.019bc120: {Name:mk7a71f9525dc776b581de8e7f122be2f8cd9881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:24.720789   61086 certs.go:381] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt.019bc120 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt
	I0205 03:12:24.720955   61086 certs.go:385] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key.019bc120 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key
	W0205 03:12:24.721193   61086 out.go:270] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0205 03:12:24.721215   61086 certs.go:624] cert expired /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.crt: expiration: 2025-02-05 03:12:02 +0000 UTC, now: 2025-02-05 03:12:24.721209953 +0000 UTC m=+9.121294227
	I0205 03:12:24.721291   61086 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.key
	I0205 03:12:24.721309   61086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.crt with IP's: []
	I0205 03:12:24.903018   61086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.crt ...
	I0205 03:12:24.903032   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.crt: {Name:mkd28c763065f1c0605dffd527607148c50a5ba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:24.903176   61086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.key ...
	I0205 03:12:24.903185   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.key: {Name:mk0f46e4fc691e53e4ec8eb3c97d09feab30b1bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
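
certs.go:624 above decides to regenerate client.crt, apiserver.crt.019bc120 and proxy-client.crt because their NotAfter dates have passed, which is exactly what this cert-expiration profile is set up to exercise. A minimal sketch of that expiry check with the standard library; the file path is copied from the log, the rest is illustrative rather than minikube's own code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpired reports whether the PEM-encoded certificate at path has expired.
func certExpired(path string, now time.Time) (bool, time.Time, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, time.Time{}, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, time.Time{}, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, time.Time{}, err
	}
	return now.After(cert.NotAfter), cert.NotAfter, nil
}

func main() {
	path := "/home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.crt"
	expired, notAfter, err := certExpired(path, time.Now())
	if err != nil {
		panic(err)
	}
	if expired {
		fmt.Printf("certificate expired at %s, regenerating\n", notAfter)
	} else {
		fmt.Printf("certificate valid until %s\n", notAfter)
	}
}
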
	I0205 03:12:24.903357   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:12:24.903388   61086 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:12:24.903394   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:12:24.903416   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:12:24.903439   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:12:24.903455   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:12:24.903486   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:12:24.904053   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:12:25.025071   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:12:25.087638   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:12:25.129554   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:12:25.207315   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0205 03:12:25.283196   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0205 03:12:25.338405   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:12:25.395551   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 03:12:25.458211   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:12:25.487739   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:12:25.515142   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:12:25.554561   61086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:12:25.577287   61086 ssh_runner.go:195] Run: openssl version
	I0205 03:12:25.585367   61086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:12:25.597083   61086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:12:25.601152   61086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:12:25.601191   61086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:12:25.608352   61086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:12:25.619316   61086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:12:25.633547   61086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:12:25.637830   61086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:12:25.637865   61086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:12:24.322413   60573 pod_ready.go:93] pod "kube-apiserver-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:24.322443   60573 pod_ready.go:82] duration metric: took 399.744838ms for pod "kube-apiserver-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:24.322458   60573 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:24.723236   60573 pod_ready.go:93] pod "kube-controller-manager-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:24.723260   60573 pod_ready.go:82] duration metric: took 400.793029ms for pod "kube-controller-manager-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:24.723272   60573 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dwrtm" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:25.122458   60573 pod_ready.go:93] pod "kube-proxy-dwrtm" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:25.122486   60573 pod_ready.go:82] duration metric: took 399.207926ms for pod "kube-proxy-dwrtm" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:25.122496   60573 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:25.521769   60573 pod_ready.go:93] pod "kube-scheduler-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:25.521792   60573 pod_ready.go:82] duration metric: took 399.289951ms for pod "kube-scheduler-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:25.521801   60573 pod_ready.go:39] duration metric: took 1.753009653s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:12:25.521816   60573 api_server.go:52] waiting for apiserver process to appear ...
	I0205 03:12:25.521865   60573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:12:25.542213   60573 api_server.go:72] duration metric: took 1.986261842s to wait for apiserver process to appear ...
	I0205 03:12:25.542241   60573 api_server.go:88] waiting for apiserver healthz status ...
	I0205 03:12:25.542258   60573 api_server.go:253] Checking apiserver healthz at https://192.168.50.73:8443/healthz ...
	I0205 03:12:25.547820   60573 api_server.go:279] https://192.168.50.73:8443/healthz returned 200:
	ok
	I0205 03:12:25.549035   60573 api_server.go:141] control plane version: v1.32.1
	I0205 03:12:25.549056   60573 api_server.go:131] duration metric: took 6.807972ms to wait for apiserver health ...
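
api_server.go above waits for the apiserver by probing https://192.168.50.73:8443/healthz until it answers 200 with body "ok". A bare-bones sketch of a single probe; certificate verification is skipped here only to keep the sketch self-contained, whereas a real check would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy probes the apiserver's /healthz endpoint once.
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch self-contained; do not do this in production.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("https://192.168.50.73:8443")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("healthz ok:", healthy)
}
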
	I0205 03:12:25.549066   60573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 03:12:25.723471   60573 system_pods.go:59] 6 kube-system pods found
	I0205 03:12:25.723502   60573 system_pods.go:61] "coredns-668d6bf9bc-wtrdd" [8021c0ec-321e-485c-8be5-8c775a1a6bd6] Running
	I0205 03:12:25.723508   60573 system_pods.go:61] "etcd-pause-922984" [fd03e340-5cf6-421b-87f4-df40bd77f11b] Running
	I0205 03:12:25.723512   60573 system_pods.go:61] "kube-apiserver-pause-922984" [57b48e9b-32af-423f-a24f-d9778038297f] Running
	I0205 03:12:25.723516   60573 system_pods.go:61] "kube-controller-manager-pause-922984" [9a2deb1a-46f7-4d41-857c-2d5f8874b507] Running
	I0205 03:12:25.723519   60573 system_pods.go:61] "kube-proxy-dwrtm" [5a97e2a0-0706-4603-8471-b77d9645621a] Running
	I0205 03:12:25.723523   60573 system_pods.go:61] "kube-scheduler-pause-922984" [d4bc9b90-e9f4-4af8-adaf-cfe8d027e9d2] Running
	I0205 03:12:25.723529   60573 system_pods.go:74] duration metric: took 174.457454ms to wait for pod list to return data ...
	I0205 03:12:25.723543   60573 default_sa.go:34] waiting for default service account to be created ...
	I0205 03:12:25.922362   60573 default_sa.go:45] found service account: "default"
	I0205 03:12:25.922388   60573 default_sa.go:55] duration metric: took 198.839271ms for default service account to be created ...
	I0205 03:12:25.922399   60573 system_pods.go:116] waiting for k8s-apps to be running ...
	I0205 03:12:26.123321   60573 system_pods.go:86] 6 kube-system pods found
	I0205 03:12:26.123354   60573 system_pods.go:89] "coredns-668d6bf9bc-wtrdd" [8021c0ec-321e-485c-8be5-8c775a1a6bd6] Running
	I0205 03:12:26.123359   60573 system_pods.go:89] "etcd-pause-922984" [fd03e340-5cf6-421b-87f4-df40bd77f11b] Running
	I0205 03:12:26.123363   60573 system_pods.go:89] "kube-apiserver-pause-922984" [57b48e9b-32af-423f-a24f-d9778038297f] Running
	I0205 03:12:26.123367   60573 system_pods.go:89] "kube-controller-manager-pause-922984" [9a2deb1a-46f7-4d41-857c-2d5f8874b507] Running
	I0205 03:12:26.123371   60573 system_pods.go:89] "kube-proxy-dwrtm" [5a97e2a0-0706-4603-8471-b77d9645621a] Running
	I0205 03:12:26.123374   60573 system_pods.go:89] "kube-scheduler-pause-922984" [d4bc9b90-e9f4-4af8-adaf-cfe8d027e9d2] Running
	I0205 03:12:26.123383   60573 system_pods.go:126] duration metric: took 200.976725ms to wait for k8s-apps to be running ...
	I0205 03:12:26.123391   60573 system_svc.go:44] waiting for kubelet service to be running ....
	I0205 03:12:26.123438   60573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:12:26.137986   60573 system_svc.go:56] duration metric: took 14.580753ms WaitForService to wait for kubelet
	I0205 03:12:26.138027   60573 kubeadm.go:582] duration metric: took 2.58208043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:12:26.138050   60573 node_conditions.go:102] verifying NodePressure condition ...
	I0205 03:12:26.323017   60573 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0205 03:12:26.323043   60573 node_conditions.go:123] node cpu capacity is 2
	I0205 03:12:26.323055   60573 node_conditions.go:105] duration metric: took 184.998131ms to run NodePressure ...
	I0205 03:12:26.323071   60573 start.go:241] waiting for startup goroutines ...
	I0205 03:12:26.323081   60573 start.go:246] waiting for cluster config update ...
	I0205 03:12:26.323091   60573 start.go:255] writing updated cluster config ...
	I0205 03:12:26.323452   60573 ssh_runner.go:195] Run: rm -f paused
	I0205 03:12:26.372942   60573 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0205 03:12:26.375497   60573 out.go:177] * Done! kubectl is now configured to use "pause-922984" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.005393907Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9362153e-0c55-469c-b120-86a497550d3a name=/runtime.v1.RuntimeService/Version
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.006250871Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3bce4a0-7511-4cfe-9634-924b02c7bd01 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.006838301Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725147006814129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3bce4a0-7511-4cfe-9634-924b02c7bd01 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.007479412Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df0cb69f-d499-4fba-9e93-3230c249f0c4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.007542964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df0cb69f-d499-4fba-9e93-3230c249f0c4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.007813542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db8d434450a5cfc14524cc376aa4b79126a93e9d17e9636e877a52468b58bd77,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738725130327729708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60086a7d780e3e08daa6e15b4c6f1ea956f24b051ee07502a89f0b0a786123f1,PodSandboxId:f87380e0e0e408b3b74124142e63e403abe06d98fafe5b8769256019eaebcb5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738725130317812537,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f43660cc317d03bbc6e9da03acf5d756cb2b52fdd9fb37056777ad8cf021ff,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738725125702892301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e00469
11418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b5eaa59adcef8e2d9b77f8b4652dbd141d5356b207fc0439b32d502da9947d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738725125673470983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f8f0eea5d0f5c9c03dca3fb1e7e750817c6c3016c056cd4933497f5c9a1df3,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738725125674724884,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de
9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9927c9020d4e0f78713fd5f987dc955646e0c45ee094446f894d94da249bbea8,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738725125652972873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755358df556dce6f2e48e92d581a1094200b24c9bdc77d425911145cd466d35b,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738725109264538595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e0046911418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceebda47a58d2ce8cf7d15ecb18287cddd291719c8c903043598bc9d961b5d9d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738725109218815338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2bc549c414cff2ca0c6a6112bdd1714da74d43a9810061be4cb7dfa1177c9b,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738725109169875433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738725109098142841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394185c489c99169a014f18ddb655f9d0e65f36644a3e0db6cd019393a37bb93,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738725109002456479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e,PodSandboxId:14ac55c55b0959908abe3581d16edb85f29e16a7052b1001d73e84312a077503,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738725097087905475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df0cb69f-d499-4fba-9e93-3230c249f0c4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.025739369Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed7733a5-f155-43ce-adac-e46c9424d2b3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.025934216Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f87380e0e0e408b3b74124142e63e403abe06d98fafe5b8769256019eaebcb5a,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-wtrdd,Uid:8021c0ec-321e-485c-8be5-8c775a1a6bd6,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1738725109039039370,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T03:11:15.989115420Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-922984,Uid:08a1e0046911418f9566e5cbc595a92c,Namespace:kube-system,
Attempt:2,},State:SANDBOX_READY,CreatedAt:1738725108820994404,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e0046911418f9566e5cbc595a92c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 08a1e0046911418f9566e5cbc595a92c,kubernetes.io/config.seen: 2025-02-05T03:11:11.273869669Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-922984,Uid:8d539d0756919905646bd69b0bfeffc5,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1738725108803186367,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d539d0756919
905646bd69b0bfeffc5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8d539d0756919905646bd69b0bfeffc5,kubernetes.io/config.seen: 2025-02-05T03:11:11.273877834Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&PodSandboxMetadata{Name:etcd-pause-922984,Uid:da5613037671f10c90f2e79dddf1916e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1738725108778135295,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.73:2379,kubernetes.io/config.hash: da5613037671f10c90f2e79dddf1916e,kubernetes.io/config.seen: 2025-02-05T03:11:11.273875494Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{
Id:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&PodSandboxMetadata{Name:kube-proxy-dwrtm,Uid:5a97e2a0-0706-4603-8471-b77d9645621a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1738725108725969877,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-05T03:11:15.518669364Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-922984,Uid:3d0ba0cbd041357457de9134c2ddaa5e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1738725108702649342,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de9134c2ddaa5e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.73:8443,kubernetes.io/config.hash: 3d0ba0cbd041357457de9134c2ddaa5e,kubernetes.io/config.seen: 2025-02-05T03:11:11.273876849Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ed7733a5-f155-43ce-adac-e46c9424d2b3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.026549463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6402f42-5324-40cc-8ab3-c9ce5518bf9a name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.026603830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6402f42-5324-40cc-8ab3-c9ce5518bf9a name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.026742200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db8d434450a5cfc14524cc376aa4b79126a93e9d17e9636e877a52468b58bd77,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738725130327729708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60086a7d780e3e08daa6e15b4c6f1ea956f24b051ee07502a89f0b0a786123f1,PodSandboxId:f87380e0e0e408b3b74124142e63e403abe06d98fafe5b8769256019eaebcb5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738725130317812537,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f43660cc317d03bbc6e9da03acf5d756cb2b52fdd9fb37056777ad8cf021ff,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738725125702892301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e00469
11418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b5eaa59adcef8e2d9b77f8b4652dbd141d5356b207fc0439b32d502da9947d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738725125673470983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f8f0eea5d0f5c9c03dca3fb1e7e750817c6c3016c056cd4933497f5c9a1df3,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738725125674724884,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de
9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9927c9020d4e0f78713fd5f987dc955646e0c45ee094446f894d94da249bbea8,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738725125652972873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6402f42-5324-40cc-8ab3-c9ce5518bf9a name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.050525534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=482b540c-3fd0-4752-9a39-ecd7f85cdcf2 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.050597409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=482b540c-3fd0-4752-9a39-ecd7f85cdcf2 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.053928110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57ff054e-8712-4724-bf9e-6006c683f315 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.054378975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725147054308942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57ff054e-8712-4724-bf9e-6006c683f315 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.054962934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55407ce7-6bb0-4011-bbad-e7a763dde3da name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.055015541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55407ce7-6bb0-4011-bbad-e7a763dde3da name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.055289175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db8d434450a5cfc14524cc376aa4b79126a93e9d17e9636e877a52468b58bd77,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738725130327729708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60086a7d780e3e08daa6e15b4c6f1ea956f24b051ee07502a89f0b0a786123f1,PodSandboxId:f87380e0e0e408b3b74124142e63e403abe06d98fafe5b8769256019eaebcb5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738725130317812537,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f43660cc317d03bbc6e9da03acf5d756cb2b52fdd9fb37056777ad8cf021ff,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738725125702892301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e00469
11418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b5eaa59adcef8e2d9b77f8b4652dbd141d5356b207fc0439b32d502da9947d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738725125673470983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f8f0eea5d0f5c9c03dca3fb1e7e750817c6c3016c056cd4933497f5c9a1df3,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738725125674724884,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de
9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9927c9020d4e0f78713fd5f987dc955646e0c45ee094446f894d94da249bbea8,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738725125652972873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755358df556dce6f2e48e92d581a1094200b24c9bdc77d425911145cd466d35b,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738725109264538595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e0046911418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceebda47a58d2ce8cf7d15ecb18287cddd291719c8c903043598bc9d961b5d9d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738725109218815338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2bc549c414cff2ca0c6a6112bdd1714da74d43a9810061be4cb7dfa1177c9b,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738725109169875433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738725109098142841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394185c489c99169a014f18ddb655f9d0e65f36644a3e0db6cd019393a37bb93,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738725109002456479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e,PodSandboxId:14ac55c55b0959908abe3581d16edb85f29e16a7052b1001d73e84312a077503,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738725097087905475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55407ce7-6bb0-4011-bbad-e7a763dde3da name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.096955406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bae54dea-5322-449d-97d0-166091b8c03c name=/runtime.v1.RuntimeService/Version
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.097029026Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bae54dea-5322-449d-97d0-166091b8c03c name=/runtime.v1.RuntimeService/Version
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.098510416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3e0dc68-aeff-454f-a669-a6be8debcdc9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.098886872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725147098864671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3e0dc68-aeff-454f-a669-a6be8debcdc9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.103914664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3d3ed8a-c142-4b8e-8de6-525aa5659175 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.104095367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3d3ed8a-c142-4b8e-8de6-525aa5659175 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:27 pause-922984 crio[2931]: time="2025-02-05 03:12:27.104639260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db8d434450a5cfc14524cc376aa4b79126a93e9d17e9636e877a52468b58bd77,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738725130327729708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60086a7d780e3e08daa6e15b4c6f1ea956f24b051ee07502a89f0b0a786123f1,PodSandboxId:f87380e0e0e408b3b74124142e63e403abe06d98fafe5b8769256019eaebcb5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738725130317812537,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f43660cc317d03bbc6e9da03acf5d756cb2b52fdd9fb37056777ad8cf021ff,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738725125702892301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e00469
11418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b5eaa59adcef8e2d9b77f8b4652dbd141d5356b207fc0439b32d502da9947d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738725125673470983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f8f0eea5d0f5c9c03dca3fb1e7e750817c6c3016c056cd4933497f5c9a1df3,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738725125674724884,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de
9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9927c9020d4e0f78713fd5f987dc955646e0c45ee094446f894d94da249bbea8,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738725125652972873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755358df556dce6f2e48e92d581a1094200b24c9bdc77d425911145cd466d35b,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738725109264538595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e0046911418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceebda47a58d2ce8cf7d15ecb18287cddd291719c8c903043598bc9d961b5d9d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738725109218815338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2bc549c414cff2ca0c6a6112bdd1714da74d43a9810061be4cb7dfa1177c9b,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738725109169875433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738725109098142841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394185c489c99169a014f18ddb655f9d0e65f36644a3e0db6cd019393a37bb93,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738725109002456479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e,PodSandboxId:14ac55c55b0959908abe3581d16edb85f29e16a7052b1001d73e84312a077503,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738725097087905475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3d3ed8a-c142-4b8e-8de6-525aa5659175 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db8d434450a5c       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   16 seconds ago      Running             kube-proxy                3                   4c7e1e87ced64       kube-proxy-dwrtm
	60086a7d780e3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 seconds ago      Running             coredns                   2                   f87380e0e0e40       coredns-668d6bf9bc-wtrdd
	57f43660cc317       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   21 seconds ago      Running             kube-scheduler            3                   bff9d333dac43       kube-scheduler-pause-922984
	27f8f0eea5d0f       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   21 seconds ago      Running             kube-apiserver            3                   769570524c5d0       kube-apiserver-pause-922984
	75b5eaa59adce       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   21 seconds ago      Running             kube-controller-manager   3                   47b15fee57f29       kube-controller-manager-pause-922984
	9927c9020d4e0       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago      Running             etcd                      3                   7bbfa4fb20c51       etcd-pause-922984
	755358df556dc       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   37 seconds ago      Exited              kube-scheduler            2                   bff9d333dac43       kube-scheduler-pause-922984
	ceebda47a58d2       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   37 seconds ago      Exited              kube-controller-manager   2                   47b15fee57f29       kube-controller-manager-pause-922984
	5f2bc549c414c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   38 seconds ago      Exited              etcd                      2                   7bbfa4fb20c51       etcd-pause-922984
	58251f22ebcb3       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   38 seconds ago      Exited              kube-proxy                2                   4c7e1e87ced64       kube-proxy-dwrtm
	394185c489c99       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   38 seconds ago      Exited              kube-apiserver            2                   769570524c5d0       kube-apiserver-pause-922984
	0b1cdef4f5830       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   50 seconds ago      Exited              coredns                   1                   14ac55c55b095       coredns-668d6bf9bc-wtrdd
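
The container status table condenses the same data: the system containers are Running at attempt 2-3, with their previous attempts Exited roughly 20 seconds earlier. If a restart loop needed investigation, the output of an exited attempt could be pulled by container ID; a hedged example using the exited kube-apiserver container listed above (crictl generally accepts a unique ID prefix):

	# Illustrative only: fetch the logs of the exited kube-apiserver attempt by ID prefix.
	out/minikube-linux-amd64 -p pause-922984 ssh \
	  "sudo crictl logs 394185c489c99"
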
	
	
	==> coredns [0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:35859 - 63632 "HINFO IN 7075890073473774007.5113352051000378399. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03451235s
	
	
	==> coredns [60086a7d780e3e08daa6e15b4c6f1ea956f24b051ee07502a89f0b0a786123f1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52742 - 21352 "HINFO IN 1416766307856171938.83712086921647280. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.021559449s
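
The earlier CoreDNS instance (0b1cdef4f5830) spent its life waiting for the Kubernetes API and was then SIGTERMed, while the replacement (60086a7d780e3) started cleanly and answered its HINFO self-check. A quick cross-check from the kubectl side, assuming the kubeconfig context is named after the profile, might look like:

	# Illustrative only: inspect the CoreDNS pods and their recent logs via the kube-dns label.
	kubectl --context pause-922984 -n kube-system get pods -l k8s-app=kube-dns -o wide
	kubectl --context pause-922984 -n kube-system logs -l k8s-app=kube-dns --tail=20
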
	
	
	==> describe nodes <==
	Name:               pause-922984
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-922984
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d
	                    minikube.k8s.io/name=pause-922984
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_05T03_11_12_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 05 Feb 2025 03:11:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-922984
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 05 Feb 2025 03:12:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 05 Feb 2025 03:12:09 +0000   Wed, 05 Feb 2025 03:11:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 05 Feb 2025 03:12:09 +0000   Wed, 05 Feb 2025 03:11:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 05 Feb 2025 03:12:09 +0000   Wed, 05 Feb 2025 03:11:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 05 Feb 2025 03:12:09 +0000   Wed, 05 Feb 2025 03:11:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.73
	  Hostname:    pause-922984
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f93e6d9b72644108bc8444d617888a4
	  System UUID:                5f93e6d9-b726-4410-8bc8-444d617888a4
	  Boot ID:                    beb2962c-4dd2-424e-afae-230b83170edc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-wtrdd                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     72s
	  kube-system                 etcd-pause-922984                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         76s
	  kube-system                 kube-apiserver-pause-922984             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-pause-922984    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-dwrtm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-pause-922984             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 70s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 34s                kube-proxy       
	  Normal  NodeHasSufficientMemory  76s                kubelet          Node pause-922984 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet          Node pause-922984 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet          Node pause-922984 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeReady                75s                kubelet          Node pause-922984 status is now: NodeReady
	  Normal  RegisteredNode           73s                node-controller  Node pause-922984 event: Registered Node pause-922984 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-922984 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-922984 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-922984 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-922984 event: Registered Node pause-922984 in Controller
	
	
	==> dmesg <==
	[  +0.065890] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051027] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.181765] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.153482] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.268000] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +4.025862] systemd-fstab-generator[736]: Ignoring "noauto" option for root device
	[Feb 5 03:11] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.060374] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.513206] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +0.105699] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.227066] kauditd_printk_skb: 31 callbacks suppressed
	[  +0.124217] systemd-fstab-generator[1484]: Ignoring "noauto" option for root device
	[ +11.062865] kauditd_printk_skb: 81 callbacks suppressed
	[  +9.278071] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.210307] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.269074] systemd-fstab-generator[2861]: Ignoring "noauto" option for root device
	[  +0.203218] systemd-fstab-generator[2892]: Ignoring "noauto" option for root device
	[  +0.371512] systemd-fstab-generator[2924]: Ignoring "noauto" option for root device
	[ +10.587790] systemd-fstab-generator[3189]: Ignoring "noauto" option for root device
	[  +0.071520] kauditd_printk_skb: 173 callbacks suppressed
	[  +5.491848] kauditd_printk_skb: 89 callbacks suppressed
	[Feb 5 03:12] systemd-fstab-generator[4088]: Ignoring "noauto" option for root device
	[  +5.599000] kauditd_printk_skb: 44 callbacks suppressed
	[ +13.226303] systemd-fstab-generator[4575]: Ignoring "noauto" option for root device
	[  +0.097871] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [5f2bc549c414cff2ca0c6a6112bdd1714da74d43a9810061be4cb7dfa1177c9b] <==
	{"level":"info","ts":"2025-02-05T03:11:51.369446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c465966f5ecfebb3 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-05T03:11:51.369492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c465966f5ecfebb3 received MsgPreVoteResp from c465966f5ecfebb3 at term 2"}
	{"level":"info","ts":"2025-02-05T03:11:51.369507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c465966f5ecfebb3 became candidate at term 3"}
	{"level":"info","ts":"2025-02-05T03:11:51.369512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c465966f5ecfebb3 received MsgVoteResp from c465966f5ecfebb3 at term 3"}
	{"level":"info","ts":"2025-02-05T03:11:51.369520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c465966f5ecfebb3 became leader at term 3"}
	{"level":"info","ts":"2025-02-05T03:11:51.369526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c465966f5ecfebb3 elected leader c465966f5ecfebb3 at term 3"}
	{"level":"info","ts":"2025-02-05T03:11:51.372541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T03:11:51.373397Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T03:11:51.372493Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"c465966f5ecfebb3","local-member-attributes":"{Name:pause-922984 ClientURLs:[https://192.168.50.73:2379]}","request-path":"/0/members/c465966f5ecfebb3/attributes","cluster-id":"ee292103c115fe9e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-05T03:11:51.373688Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T03:11:51.373916Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-05T03:11:51.373953Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-05T03:11:51.374143Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.73:2379"}
	{"level":"info","ts":"2025-02-05T03:11:51.374384Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T03:11:51.374930Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-05T03:11:52.959931Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-05T03:11:52.960024Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-922984","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.73:2380"],"advertise-client-urls":["https://192.168.50.73:2379"]}
	{"level":"warn","ts":"2025-02-05T03:11:52.960144Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T03:11:52.960256Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T03:11:52.977319Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.73:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T03:11:52.977457Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.73:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-05T03:11:52.977585Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c465966f5ecfebb3","current-leader-member-id":"c465966f5ecfebb3"}
	{"level":"info","ts":"2025-02-05T03:11:52.984141Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.50.73:2380"}
	{"level":"info","ts":"2025-02-05T03:11:52.984309Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.50.73:2380"}
	{"level":"info","ts":"2025-02-05T03:11:52.984453Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-922984","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.73:2380"],"advertise-client-urls":["https://192.168.50.73:2379"]}
	
	
	==> etcd [9927c9020d4e0f78713fd5f987dc955646e0c45ee094446f894d94da249bbea8] <==
	{"level":"warn","ts":"2025-02-05T03:12:13.617729Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"641.272165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" limit:1 ","response":"range_response_count:1 size:370"}
	{"level":"info","ts":"2025-02-05T03:12:13.619684Z","caller":"traceutil/trace.go:171","msg":"trace[2095290305] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:425; }","duration":"643.244471ms","start":"2025-02-05T03:12:12.976422Z","end":"2025-02-05T03:12:13.619666Z","steps":["trace[2095290305] 'agreement among raft nodes before linearized reading'  (duration: 641.2546ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:13.618016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"365.237924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2025-02-05T03:12:13.619829Z","caller":"traceutil/trace.go:171","msg":"trace[1530625671] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:425; }","duration":"367.079513ms","start":"2025-02-05T03:12:13.252737Z","end":"2025-02-05T03:12:13.619816Z","steps":["trace[1530625671] 'agreement among raft nodes before linearized reading'  (duration: 365.203423ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:13.619891Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:13.252724Z","time spent":"367.150776ms","remote":"127.0.0.1:39480","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4157,"request content":"key:\"/registry/deployments/kube-system/coredns\" limit:1 "}
	{"level":"warn","ts":"2025-02-05T03:12:13.619856Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:12.976412Z","time spent":"643.381316ms","remote":"127.0.0.1:39154","response type":"/etcdserverpb.KV/Range","request count":0,"request size":89,"response count":1,"response size":394,"request content":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" limit:1 "}
	{"level":"warn","ts":"2025-02-05T03:12:14.336548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"327.70388ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16984082258399836864 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" mod_revision:424 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-02-05T03:12:14.336957Z","caller":"traceutil/trace.go:171","msg":"trace[875598015] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:465; }","duration":"695.959195ms","start":"2025-02-05T03:12:13.640980Z","end":"2025-02-05T03:12:14.336939Z","steps":["trace[875598015] 'read index received'  (duration: 367.769247ms)","trace[875598015] 'applied index is now lower than readState.Index'  (duration: 328.189432ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-05T03:12:14.337231Z","caller":"traceutil/trace.go:171","msg":"trace[2024005879] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"698.486237ms","start":"2025-02-05T03:12:13.638733Z","end":"2025-02-05T03:12:14.337219Z","steps":["trace[2024005879] 'process raft request'  (duration: 370.060285ms)","trace[2024005879] 'compare'  (duration: 327.390866ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-05T03:12:14.338645Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:13.638709Z","time spent":"699.886542ms","remote":"127.0.0.1:39516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" mod_revision:424 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" > >"}
	{"level":"info","ts":"2025-02-05T03:12:14.337369Z","caller":"traceutil/trace.go:171","msg":"trace[2045710403] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"698.107859ms","start":"2025-02-05T03:12:13.639216Z","end":"2025-02-05T03:12:14.337324Z","steps":["trace[2045710403] 'process raft request'  (duration: 697.64756ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:14.338834Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:13.639200Z","time spent":"699.582262ms","remote":"127.0.0.1:39480","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:422 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2025-02-05T03:12:14.337393Z","caller":"traceutil/trace.go:171","msg":"trace[1781317986] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"601.778971ms","start":"2025-02-05T03:12:13.735608Z","end":"2025-02-05T03:12:14.337387Z","steps":["trace[1781317986] 'process raft request'  (duration: 601.308145ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:14.337479Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"696.483953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-922984\" limit:1 ","response":"range_response_count:1 size:5840"}
	{"level":"info","ts":"2025-02-05T03:12:14.339024Z","caller":"traceutil/trace.go:171","msg":"trace[1223521871] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-922984; range_end:; response_count:1; response_revision:428; }","duration":"698.060131ms","start":"2025-02-05T03:12:13.640950Z","end":"2025-02-05T03:12:14.339010Z","steps":["trace[1223521871] 'agreement among raft nodes before linearized reading'  (duration: 696.452287ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:14.339094Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:13.640937Z","time spent":"698.143829ms","remote":"127.0.0.1:39236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5864,"request content":"key:\"/registry/pods/kube-system/etcd-pause-922984\" limit:1 "}
	{"level":"warn","ts":"2025-02-05T03:12:14.339277Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:13.735585Z","time spent":"603.659117ms","remote":"127.0.0.1:39130","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":616,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-922984.18213149fd504b3c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-922984.18213149fd504b3c\" value_size:544 lease:7760710221545061038 >> failure:<>"}
	{"level":"warn","ts":"2025-02-05T03:12:14.845481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.544591ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16984082258399836869 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-922984.1821314a06311815\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-922984.1821314a06311815\" value_size:598 lease:7760710221545061038 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-02-05T03:12:14.845641Z","caller":"traceutil/trace.go:171","msg":"trace[123226104] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"222.548058ms","start":"2025-02-05T03:12:14.623084Z","end":"2025-02-05T03:12:14.845632Z","steps":["trace[123226104] 'process raft request'  (duration: 222.490147ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T03:12:14.845658Z","caller":"traceutil/trace.go:171","msg":"trace[168612774] linearizableReadLoop","detail":"{readStateIndex:469; appliedIndex:468; }","duration":"499.946854ms","start":"2025-02-05T03:12:14.345695Z","end":"2025-02-05T03:12:14.845642Z","steps":["trace[168612774] 'read index received'  (duration: 284.198024ms)","trace[168612774] 'applied index is now lower than readState.Index'  (duration: 215.74757ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-05T03:12:14.845800Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"500.121758ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-922984\" limit:1 ","response":"range_response_count:1 size:5840"}
	{"level":"info","ts":"2025-02-05T03:12:14.847710Z","caller":"traceutil/trace.go:171","msg":"trace[1421704583] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-922984; range_end:; response_count:1; response_revision:430; }","duration":"502.054392ms","start":"2025-02-05T03:12:14.345645Z","end":"2025-02-05T03:12:14.847700Z","steps":["trace[1421704583] 'agreement among raft nodes before linearized reading'  (duration: 500.087458ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:14.847767Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:14.345634Z","time spent":"502.116232ms","remote":"127.0.0.1:39236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5864,"request content":"key:\"/registry/pods/kube-system/etcd-pause-922984\" limit:1 "}
	{"level":"info","ts":"2025-02-05T03:12:14.845984Z","caller":"traceutil/trace.go:171","msg":"trace[930455332] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"502.397111ms","start":"2025-02-05T03:12:14.343576Z","end":"2025-02-05T03:12:14.845973Z","steps":["trace[930455332] 'process raft request'  (duration: 286.191491ms)","trace[930455332] 'compare'  (duration: 215.426254ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-05T03:12:14.847937Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:14.343560Z","time spent":"504.347437ms","remote":"127.0.0.1:39130","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":670,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-922984.1821314a06311815\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-922984.1821314a06311815\" value_size:598 lease:7760710221545061038 >> failure:<>"}
	
	
	==> kernel <==
	 03:12:27 up 1 min,  0 users,  load average: 1.18, 0.37, 0.13
	Linux pause-922984 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [27f8f0eea5d0f5c9c03dca3fb1e7e750817c6c3016c056cd4933497f5c9a1df3] <==
	I0205 03:12:09.074137       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0205 03:12:09.074272       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0205 03:12:09.074544       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0205 03:12:09.074684       1 shared_informer.go:320] Caches are synced for configmaps
	I0205 03:12:09.074743       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0205 03:12:09.073901       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0205 03:12:09.075616       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0205 03:12:09.077564       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0205 03:12:09.085841       1 aggregator.go:171] initial CRD sync complete...
	I0205 03:12:09.085915       1 autoregister_controller.go:144] Starting autoregister controller
	I0205 03:12:09.086010       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0205 03:12:09.086034       1 cache.go:39] Caches are synced for autoregister controller
	I0205 03:12:09.089748       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0205 03:12:09.090930       1 policy_source.go:240] refreshing policies
	I0205 03:12:09.103066       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0205 03:12:09.144811       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0205 03:12:09.978280       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0205 03:12:10.037928       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0205 03:12:10.813938       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0205 03:12:10.860925       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0205 03:12:10.895234       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0205 03:12:10.902129       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0205 03:12:12.280576       1 controller.go:615] quota admission added evaluator for: endpoints
	I0205 03:12:12.975393       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0205 03:12:12.975797       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [394185c489c99169a014f18ddb655f9d0e65f36644a3e0db6cd019393a37bb93] <==
	W0205 03:12:02.272309       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.382005       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.387543       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.392920       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.405588       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.425611       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.428991       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.434794       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.434905       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.544446       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.546998       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.563161       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.588672       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.647751       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.653119       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.719680       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.719748       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.764806       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.785593       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.789075       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.809871       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.831999       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.961699       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:03.071479       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:03.078209       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [75b5eaa59adcef8e2d9b77f8b4652dbd141d5356b207fc0439b32d502da9947d] <==
	I0205 03:12:12.274899       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0205 03:12:12.275130       1 shared_informer.go:320] Caches are synced for expand
	I0205 03:12:12.276318       1 shared_informer.go:320] Caches are synced for GC
	I0205 03:12:12.276422       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0205 03:12:12.276745       1 shared_informer.go:320] Caches are synced for cronjob
	I0205 03:12:12.276786       1 shared_informer.go:320] Caches are synced for taint
	I0205 03:12:12.276840       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0205 03:12:12.276927       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-922984"
	I0205 03:12:12.276981       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0205 03:12:12.277419       1 shared_informer.go:320] Caches are synced for PVC protection
	I0205 03:12:12.277606       1 shared_informer.go:320] Caches are synced for HPA
	I0205 03:12:12.282959       1 shared_informer.go:320] Caches are synced for daemon sets
	I0205 03:12:12.287277       1 shared_informer.go:320] Caches are synced for resource quota
	I0205 03:12:12.306439       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0205 03:12:12.308736       1 shared_informer.go:320] Caches are synced for garbage collector
	I0205 03:12:12.309974       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0205 03:12:12.315666       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0205 03:12:12.327419       1 shared_informer.go:320] Caches are synced for job
	I0205 03:12:12.329844       1 shared_informer.go:320] Caches are synced for resource quota
	I0205 03:12:12.331235       1 shared_informer.go:320] Caches are synced for persistent volume
	I0205 03:12:12.337501       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0205 03:12:13.619430       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0205 03:12:13.628695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.352208873s"
	I0205 03:12:14.341185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="712.367052ms"
	I0205 03:12:14.341417       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="165.816µs"
	
	
	==> kube-controller-manager [ceebda47a58d2ce8cf7d15ecb18287cddd291719c8c903043598bc9d961b5d9d] <==
	I0205 03:11:50.359394       1 serving.go:386] Generated self-signed cert in-memory
	I0205 03:11:50.856907       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0205 03:11:50.856993       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 03:11:50.858563       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0205 03:11:50.858763       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0205 03:11:50.858918       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0205 03:11:50.858996       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0205 03:11:50.777543       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0205 03:11:52.702123       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.73"]
	E0205 03:11:52.710119       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 03:11:52.755974       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0205 03:11:52.756080       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0205 03:11:52.756108       1 server_linux.go:170] "Using iptables Proxier"
	I0205 03:11:52.766585       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 03:11:52.766939       1 server.go:497] "Version info" version="v1.32.1"
	I0205 03:11:52.767170       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 03:11:52.768801       1 config.go:199] "Starting service config controller"
	I0205 03:11:52.768906       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 03:11:52.769004       1 config.go:105] "Starting endpoint slice config controller"
	I0205 03:11:52.769025       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 03:11:52.769937       1 config.go:329] "Starting node config controller"
	I0205 03:11:52.769978       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 03:11:52.870744       1 shared_informer.go:320] Caches are synced for node config
	I0205 03:11:52.886209       1 shared_informer.go:320] Caches are synced for service config
	I0205 03:11:52.889215       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [db8d434450a5cfc14524cc376aa4b79126a93e9d17e9636e877a52468b58bd77] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0205 03:12:10.539209       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0205 03:12:10.550004       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.73"]
	E0205 03:12:10.550159       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 03:12:10.585210       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0205 03:12:10.587553       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0205 03:12:10.587599       1 server_linux.go:170] "Using iptables Proxier"
	I0205 03:12:10.590786       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 03:12:10.591184       1 server.go:497] "Version info" version="v1.32.1"
	I0205 03:12:10.591217       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 03:12:10.593462       1 config.go:199] "Starting service config controller"
	I0205 03:12:10.593510       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 03:12:10.593543       1 config.go:105] "Starting endpoint slice config controller"
	I0205 03:12:10.593565       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 03:12:10.594267       1 config.go:329] "Starting node config controller"
	I0205 03:12:10.594303       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 03:12:10.694268       1 shared_informer.go:320] Caches are synced for service config
	I0205 03:12:10.694309       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0205 03:12:10.694679       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [57f43660cc317d03bbc6e9da03acf5d756cb2b52fdd9fb37056777ad8cf021ff] <==
	I0205 03:12:07.669848       1 serving.go:386] Generated self-signed cert in-memory
	W0205 03:12:09.034937       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0205 03:12:09.034995       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0205 03:12:09.035008       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0205 03:12:09.035029       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0205 03:12:09.071690       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0205 03:12:09.071831       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 03:12:09.083576       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0205 03:12:09.085413       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 03:12:09.090375       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0205 03:12:09.085442       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0205 03:12:09.192926       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [755358df556dce6f2e48e92d581a1094200b24c9bdc77d425911145cd466d35b] <==
	I0205 03:11:50.553236       1 serving.go:386] Generated self-signed cert in-memory
	W0205 03:11:52.607517       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0205 03:11:52.607799       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0205 03:11:52.607892       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0205 03:11:52.607930       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0205 03:11:52.702924       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0205 03:11:52.703008       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0205 03:11:52.703078       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0205 03:11:52.714407       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 03:11:52.715832       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0205 03:11:52.715995       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0205 03:11:52.716704       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0205 03:11:52.716809       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0205 03:11:52.717034       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0205 03:11:52.717119       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 03:11:52.717238       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0205 03:11:52.718310       1 server.go:266] "waiting for handlers to sync" err="context canceled"
	E0205 03:11:52.718799       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 05 03:12:08 pause-922984 kubelet[4095]: E0205 03:12:08.155593    4095 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-922984\" not found" node="pause-922984"
	Feb 05 03:12:08 pause-922984 kubelet[4095]: E0205 03:12:08.156203    4095 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-922984\" not found" node="pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.108936    4095 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.136177    4095 kubelet_node_status.go:125] "Node was previously registered" node="pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.136464    4095 kubelet_node_status.go:79] "Successfully registered node" node="pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.136584    4095 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.137849    4095 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: E0205 03:12:09.149074    4095 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-922984\" already exists" pod="kube-system/etcd-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.149314    4095 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: E0205 03:12:09.188032    4095 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-922984\" already exists" pod="kube-system/kube-apiserver-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.188074    4095 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: E0205 03:12:09.204288    4095 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-922984\" already exists" pod="kube-system/kube-controller-manager-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.204317    4095 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: E0205 03:12:09.209484    4095 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-922984\" already exists" pod="kube-system/kube-scheduler-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.990977    4095 apiserver.go:52] "Watching apiserver"
	Feb 05 03:12:10 pause-922984 kubelet[4095]: I0205 03:12:10.002477    4095 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Feb 05 03:12:10 pause-922984 kubelet[4095]: I0205 03:12:10.033040    4095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a97e2a0-0706-4603-8471-b77d9645621a-xtables-lock\") pod \"kube-proxy-dwrtm\" (UID: \"5a97e2a0-0706-4603-8471-b77d9645621a\") " pod="kube-system/kube-proxy-dwrtm"
	Feb 05 03:12:10 pause-922984 kubelet[4095]: I0205 03:12:10.033152    4095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a97e2a0-0706-4603-8471-b77d9645621a-lib-modules\") pod \"kube-proxy-dwrtm\" (UID: \"5a97e2a0-0706-4603-8471-b77d9645621a\") " pod="kube-system/kube-proxy-dwrtm"
	Feb 05 03:12:10 pause-922984 kubelet[4095]: I0205 03:12:10.296113    4095 scope.go:117] "RemoveContainer" containerID="58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd"
	Feb 05 03:12:10 pause-922984 kubelet[4095]: I0205 03:12:10.296747    4095 scope.go:117] "RemoveContainer" containerID="0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e"
	Feb 05 03:12:12 pause-922984 kubelet[4095]: I0205 03:12:12.455508    4095 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Feb 05 03:12:15 pause-922984 kubelet[4095]: E0205 03:12:15.157853    4095 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725135156629787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 03:12:15 pause-922984 kubelet[4095]: E0205 03:12:15.157905    4095 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725135156629787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 03:12:25 pause-922984 kubelet[4095]: E0205 03:12:25.160523    4095 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725145159215926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 03:12:25 pause-922984 kubelet[4095]: E0205 03:12:25.161095    4095 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725145159215926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-922984 -n pause-922984
helpers_test.go:261: (dbg) Run:  kubectl --context pause-922984 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-922984 -n pause-922984
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-922984 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-922984 logs -n 25: (1.293191145s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-290619                | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:07 UTC | 05 Feb 25 03:07 UTC |
	| start   | -p NoKubernetes-290619                | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:07 UTC | 05 Feb 25 03:08 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-409141           | force-systemd-env-409141  | jenkins | v1.35.0 | 05 Feb 25 03:07 UTC | 05 Feb 25 03:07 UTC |
	| start   | -p cert-expiration-908105             | cert-expiration-908105    | jenkins | v1.35.0 | 05 Feb 25 03:07 UTC | 05 Feb 25 03:09 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-467430 ssh cat     | force-systemd-flag-467430 | jenkins | v1.35.0 | 05 Feb 25 03:08 UTC | 05 Feb 25 03:08 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-467430          | force-systemd-flag-467430 | jenkins | v1.35.0 | 05 Feb 25 03:08 UTC | 05 Feb 25 03:08 UTC |
	| start   | -p cert-options-653669                | cert-options-653669       | jenkins | v1.35.0 | 05 Feb 25 03:08 UTC | 05 Feb 25 03:09 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-290619 sudo           | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:08 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-290619                | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	| start   | -p NoKubernetes-290619                | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-292727             | running-upgrade-292727    | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	| start   | -p kubernetes-upgrade-024079          | kubernetes-upgrade-024079 | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-653669 ssh               | cert-options-653669       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-653669 -- sudo        | cert-options-653669       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-653669                | cert-options-653669       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	| start   | -p stopped-upgrade-687224             | minikube                  | jenkins | v1.26.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:10 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-290619 sudo           | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-290619                | NoKubernetes-290619       | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:09 UTC |
	| start   | -p pause-922984 --memory=2048         | pause-922984              | jenkins | v1.35.0 | 05 Feb 25 03:09 UTC | 05 Feb 25 03:11 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-687224 stop           | minikube                  | jenkins | v1.26.0 | 05 Feb 25 03:10 UTC | 05 Feb 25 03:10 UTC |
	| start   | -p stopped-upgrade-687224             | stopped-upgrade-687224    | jenkins | v1.35.0 | 05 Feb 25 03:10 UTC | 05 Feb 25 03:11 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-922984                       | pause-922984              | jenkins | v1.35.0 | 05 Feb 25 03:11 UTC | 05 Feb 25 03:12 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-687224             | stopped-upgrade-687224    | jenkins | v1.35.0 | 05 Feb 25 03:11 UTC | 05 Feb 25 03:11 UTC |
	| start   | -p old-k8s-version-191773             | old-k8s-version-191773    | jenkins | v1.35.0 | 05 Feb 25 03:11 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p cert-expiration-908105             | cert-expiration-908105    | jenkins | v1.35.0 | 05 Feb 25 03:12 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 03:12:15
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 03:12:15.642511   61086 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:12:15.642648   61086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:12:15.642651   61086 out.go:358] Setting ErrFile to fd 2...
	I0205 03:12:15.642655   61086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:12:15.642843   61086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:12:15.643452   61086 out.go:352] Setting JSON to false
	I0205 03:12:15.644562   61086 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6887,"bootTime":1738718249,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:12:15.644618   61086 start.go:139] virtualization: kvm guest
	I0205 03:12:15.647171   61086 out.go:177] * [cert-expiration-908105] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:12:15.648499   61086 notify.go:220] Checking for updates...
	I0205 03:12:15.648506   61086 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:12:15.649713   61086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:12:15.651047   61086 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:12:15.652372   61086 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:12:15.653543   61086 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:12:15.654791   61086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:12:14.044494   60782 out.go:235]   - Generating certificates and keys ...
	I0205 03:12:14.044638   60782 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:12:14.044747   60782 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:12:14.044894   60782 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 03:12:14.162900   60782 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 03:12:14.401545   60782 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 03:12:14.816155   60782 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 03:12:15.071858   60782 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 03:12:15.072071   60782 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-191773] and IPs [192.168.39.74 127.0.0.1 ::1]
	I0205 03:12:15.223663   60782 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 03:12:15.224178   60782 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-191773] and IPs [192.168.39.74 127.0.0.1 ::1]
	I0205 03:12:15.611954   60782 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 03:12:15.656305   61086 config.go:182] Loaded profile config "cert-expiration-908105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:12:15.656691   61086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:12:15.656732   61086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:12:15.672278   61086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I0205 03:12:15.672741   61086 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:12:15.673311   61086 main.go:141] libmachine: Using API Version  1
	I0205 03:12:15.673331   61086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:12:15.673690   61086 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:12:15.673871   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:15.674078   61086 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:12:15.674366   61086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:12:15.674409   61086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:12:15.689228   61086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I0205 03:12:15.689674   61086 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:12:15.690164   61086 main.go:141] libmachine: Using API Version  1
	I0205 03:12:15.690176   61086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:12:15.690504   61086 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:12:15.690662   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:15.729325   61086 out.go:177] * Using the kvm2 driver based on existing profile
	I0205 03:12:15.730629   61086 start.go:297] selected driver: kvm2
	I0205 03:12:15.730639   61086 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-908105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-908105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.120 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:12:15.730770   61086 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:12:15.731714   61086 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:12:15.731788   61086 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:12:15.747773   61086 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:12:15.748245   61086 cni.go:84] Creating CNI manager for ""
	I0205 03:12:15.748287   61086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:12:15.748336   61086 start.go:340] cluster config:
	{Name:cert-expiration-908105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-908105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.120 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:12:15.748441   61086 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:12:15.750663   61086 out.go:177] * Starting "cert-expiration-908105" primary control-plane node in "cert-expiration-908105" cluster
	I0205 03:12:15.751836   61086 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:12:15.751870   61086 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 03:12:15.751876   61086 cache.go:56] Caching tarball of preloaded images
	I0205 03:12:15.751941   61086 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 03:12:15.751947   61086 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 03:12:15.752023   61086 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/config.json ...
	I0205 03:12:15.752228   61086 start.go:360] acquireMachinesLock for cert-expiration-908105: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:12:15.752268   61086 start.go:364] duration metric: took 28.182µs to acquireMachinesLock for "cert-expiration-908105"
	I0205 03:12:15.752278   61086 start.go:96] Skipping create...Using existing machine configuration
	I0205 03:12:15.752281   61086 fix.go:54] fixHost starting: 
	I0205 03:12:15.752548   61086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:12:15.752577   61086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:12:15.767886   61086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36873
	I0205 03:12:15.768247   61086 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:12:15.768673   61086 main.go:141] libmachine: Using API Version  1
	I0205 03:12:15.768689   61086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:12:15.768981   61086 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:12:15.769175   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:15.769314   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetState
	I0205 03:12:15.770827   61086 fix.go:112] recreateIfNeeded on cert-expiration-908105: state=Running err=<nil>
	W0205 03:12:15.770838   61086 fix.go:138] unexpected machine state, will restart: <nil>
	I0205 03:12:15.772753   61086 out.go:177] * Updating the running kvm2 "cert-expiration-908105" VM ...
	I0205 03:12:15.730142   60782 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 03:12:15.805509   60782 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 03:12:15.805952   60782 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:12:15.893294   60782 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:12:15.998614   60782 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:12:16.213085   60782 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:12:16.406193   60782 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:12:16.426712   60782 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:12:16.428908   60782 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:12:16.429005   60782 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:12:16.595620   60782 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:12:15.284936   58877 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:12:15.285232   58877 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:12:15.285256   58877 kubeadm.go:310] 
	I0205 03:12:15.285312   58877 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0205 03:12:15.285414   58877 kubeadm.go:310] 		timed out waiting for the condition
	I0205 03:12:15.285448   58877 kubeadm.go:310] 
	I0205 03:12:15.285506   58877 kubeadm.go:310] 	This error is likely caused by:
	I0205 03:12:15.285551   58877 kubeadm.go:310] 		- The kubelet is not running
	I0205 03:12:15.285699   58877 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0205 03:12:15.285733   58877 kubeadm.go:310] 
	I0205 03:12:15.285873   58877 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0205 03:12:15.285922   58877 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0205 03:12:15.285965   58877 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0205 03:12:15.285974   58877 kubeadm.go:310] 
	I0205 03:12:15.286094   58877 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0205 03:12:15.286194   58877 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0205 03:12:15.286206   58877 kubeadm.go:310] 
	I0205 03:12:15.286334   58877 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0205 03:12:15.286453   58877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0205 03:12:15.286551   58877 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0205 03:12:15.286639   58877 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0205 03:12:15.286651   58877 kubeadm.go:310] 
	I0205 03:12:15.286812   58877 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:12:15.286920   58877 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0205 03:12:15.287004   58877 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0205 03:12:15.287144   58877 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-024079 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-024079 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
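	
	The first 'kubeadm init' attempt above failed because the kubelet on kubernetes-upgrade-024079 never answered on 127.0.0.1:10248. A minimal triage sketch, reusing only the commands the kubeadm output itself suggests and the 'ssh' subcommand used elsewhere in this run (the profile name is taken from the surrounding log; whether the VM is still reachable at this point is an assumption):
	
		out/minikube-linux-amd64 -p kubernetes-upgrade-024079 ssh "sudo systemctl status kubelet --no-pager"
		out/minikube-linux-amd64 -p kubernetes-upgrade-024079 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
		out/minikube-linux-amd64 -p kubernetes-upgrade-024079 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	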
	
	I0205 03:12:15.287193   58877 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0205 03:12:16.366363   58877 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.079134587s)
	I0205 03:12:16.366450   58877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:12:16.380456   58877 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:12:16.390034   58877 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:12:16.390060   58877 kubeadm.go:157] found existing configuration files:
	
	I0205 03:12:16.390106   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:12:16.398997   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:12:16.399054   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:12:16.409431   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:12:16.420730   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:12:16.420798   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:12:16.432957   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:12:16.441938   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:12:16.442017   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:12:16.452609   58877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:12:16.463862   58877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:12:16.463948   58877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 03:12:16.476971   58877 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:12:16.555608   58877 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0205 03:12:16.555761   58877 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:12:16.735232   58877 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:12:16.735377   58877 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:12:16.735482   58877 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0205 03:12:16.930863   58877 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:12:16.932632   58877 out.go:235]   - Generating certificates and keys ...
	I0205 03:12:16.932734   58877 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:12:16.932818   58877 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:12:16.932950   58877 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0205 03:12:16.933040   58877 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0205 03:12:16.933137   58877 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0205 03:12:16.933235   58877 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0205 03:12:16.933372   58877 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0205 03:12:16.933474   58877 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0205 03:12:16.933589   58877 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0205 03:12:16.933705   58877 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0205 03:12:16.933767   58877 kubeadm.go:310] [certs] Using the existing "sa" key
	I0205 03:12:16.933848   58877 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:12:16.992914   58877 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:12:17.050399   58877 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:12:17.166345   58877 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:12:17.371653   58877 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:12:17.385934   58877 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:12:17.387054   58877 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:12:17.387133   58877 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:12:17.524773   58877 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:12:15.040615   60573 pod_ready.go:103] pod "etcd-pause-922984" in "kube-system" namespace has status "Ready":"False"
	I0205 03:12:17.018319   60573 pod_ready.go:93] pod "etcd-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:17.018349   60573 pod_ready.go:82] duration metric: took 4.009957995s for pod "etcd-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:17.018364   60573 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:19.023649   60573 pod_ready.go:103] pod "kube-apiserver-pause-922984" in "kube-system" namespace has status "Ready":"False"
	I0205 03:12:17.526688   58877 out.go:235]   - Booting up control plane ...
	I0205 03:12:17.526814   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:12:17.529207   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:12:17.531542   58877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:12:17.532560   58877 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:12:17.538131   58877 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0205 03:12:15.773860   61086 machine.go:93] provisionDockerMachine start ...
	I0205 03:12:15.773873   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:15.774044   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:15.776762   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:15.777174   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:15.777205   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:15.777362   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:15.777527   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:15.777681   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:15.777799   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:15.777975   61086 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:15.778139   61086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.120 22 <nil> <nil>}
	I0205 03:12:15.778144   61086 main.go:141] libmachine: About to run SSH command:
	hostname
	I0205 03:12:15.882368   61086 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-908105
	
	I0205 03:12:15.882387   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetMachineName
	I0205 03:12:15.882628   61086 buildroot.go:166] provisioning hostname "cert-expiration-908105"
	I0205 03:12:15.882647   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetMachineName
	I0205 03:12:15.882789   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:15.885728   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:15.886074   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:15.886096   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:15.886261   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:15.886438   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:15.886637   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:15.886773   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:15.886938   61086 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:15.887089   61086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.120 22 <nil> <nil>}
	I0205 03:12:15.887095   61086 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-908105 && echo "cert-expiration-908105" | sudo tee /etc/hostname
	I0205 03:12:16.004802   61086 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-908105
	
	I0205 03:12:16.004821   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:16.007856   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.008268   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:16.008312   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.008498   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:16.008708   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:16.008903   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:16.009072   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:16.009267   61086 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:16.009501   61086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.120 22 <nil> <nil>}
	I0205 03:12:16.009520   61086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-908105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-908105/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-908105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:12:16.118224   61086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
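	
	The hostname fix-up just above is idempotent: when /etc/hosts has no line ending in the profile name, it either rewrites an existing 127.0.1.1 entry in place or appends a new one. A sketch of the expected end state and one way to confirm it (the appended line follows from the 'echo ... | sudo tee -a' branch; the check command is an assumption, not captured output):
	
		# expected entry in /etc/hosts on the guest
		127.0.1.1 cert-expiration-908105
	
		out/minikube-linux-amd64 -p cert-expiration-908105 ssh "grep cert-expiration-908105 /etc/hosts"
	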
	I0205 03:12:16.118243   61086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:12:16.118291   61086 buildroot.go:174] setting up certificates
	I0205 03:12:16.118300   61086 provision.go:84] configureAuth start
	I0205 03:12:16.118311   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetMachineName
	I0205 03:12:16.118588   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetIP
	I0205 03:12:16.121597   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.121946   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:16.121982   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.122118   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:16.124413   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.124791   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:16.124813   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.124942   61086 provision.go:143] copyHostCerts
	I0205 03:12:16.125000   61086 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:12:16.125008   61086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:12:16.125060   61086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:12:16.125135   61086 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:12:16.125138   61086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:12:16.125160   61086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:12:16.125206   61086 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:12:16.125209   61086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:12:16.125225   61086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:12:16.125266   61086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-908105 san=[127.0.0.1 192.168.72.120 cert-expiration-908105 localhost minikube]
	I0205 03:12:16.283216   61086 provision.go:177] copyRemoteCerts
	I0205 03:12:16.283259   61086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:12:16.283278   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:16.286164   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.286504   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:16.286527   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.286678   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:16.286868   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:16.286998   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:16.287109   61086 sshutil.go:53] new ssh client: &{IP:192.168.72.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/cert-expiration-908105/id_rsa Username:docker}
	I0205 03:12:16.370983   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0205 03:12:16.398537   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0205 03:12:16.426606   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:12:16.456607   61086 provision.go:87] duration metric: took 338.295013ms to configureAuth
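	
	This profile was originally created with --cert-expiration=3m and is being restarted here with --cert-expiration=8760h (see the Audit table above), so the relevant state after provisioning is what the control-plane certificate currently says. A hedged sketch of how one could check from the host, following the openssl invocation the cert-options run used (the certificate path for this profile is not captured here, so it is an assumption):
	
		out/minikube-linux-amd64 -p cert-expiration-908105 ssh "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
	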
	I0205 03:12:16.456626   61086 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:12:16.456834   61086 config.go:182] Loaded profile config "cert-expiration-908105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:12:16.456923   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:16.460461   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.460895   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:16.460921   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:16.461144   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:16.461360   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:16.461515   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:16.461616   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:16.461733   61086 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:16.461901   61086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.120 22 <nil> <nil>}
	I0205 03:12:16.461909   61086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:12:16.597537   60782 out.go:235]   - Booting up control plane ...
	I0205 03:12:16.597694   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:12:16.605308   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:12:16.606721   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:12:16.609285   60782 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:12:16.613978   60782 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0205 03:12:21.026158   60573 pod_ready.go:103] pod "kube-apiserver-pause-922984" in "kube-system" namespace has status "Ready":"False"
	I0205 03:12:21.524745   60573 pod_ready.go:93] pod "kube-apiserver-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:21.524773   60573 pod_ready.go:82] duration metric: took 4.506399667s for pod "kube-apiserver-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:21.524788   60573 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:21.529193   60573 pod_ready.go:93] pod "kube-controller-manager-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:21.529218   60573 pod_ready.go:82] duration metric: took 4.422836ms for pod "kube-controller-manager-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:21.529227   60573 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dwrtm" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:21.532919   60573 pod_ready.go:93] pod "kube-proxy-dwrtm" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:21.532936   60573 pod_ready.go:82] duration metric: took 3.703892ms for pod "kube-proxy-dwrtm" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:21.532944   60573 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.542321   60573 pod_ready.go:93] pod "kube-scheduler-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:23.542345   60573 pod_ready.go:82] duration metric: took 2.009394124s for pod "kube-scheduler-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.542358   60573 pod_ready.go:39] duration metric: took 12.559300813s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:12:23.542375   60573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 03:12:23.554545   60573 ops.go:34] apiserver oom_adj: -16
	I0205 03:12:23.554568   60573 kubeadm.go:597] duration metric: took 33.441182227s to restartPrimaryControlPlane
	I0205 03:12:23.554577   60573 kubeadm.go:394] duration metric: took 33.624847083s to StartCluster
	I0205 03:12:23.554594   60573 settings.go:142] acquiring lock: {Name:mk2eca847da5ba78f5b041a83e5cfcbdebb0c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:23.554669   60573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:12:23.555668   60573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:23.555912   60573 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.73 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:12:23.556048   60573 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0205 03:12:23.556180   60573 config.go:182] Loaded profile config "pause-922984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:12:23.557404   60573 out.go:177] * Verifying Kubernetes components...
	I0205 03:12:23.557406   60573 out.go:177] * Enabled addons: 
	I0205 03:12:22.066812   61086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 03:12:22.066829   61086 machine.go:96] duration metric: took 6.292961729s to provisionDockerMachine
	I0205 03:12:22.066842   61086 start.go:293] postStartSetup for "cert-expiration-908105" (driver="kvm2")
	I0205 03:12:22.066853   61086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:12:22.066869   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:22.067295   61086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:12:22.067322   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:22.070707   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.071190   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:22.071222   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.071425   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:22.071613   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:22.071801   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:22.071933   61086 sshutil.go:53] new ssh client: &{IP:192.168.72.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/cert-expiration-908105/id_rsa Username:docker}
	I0205 03:12:22.151890   61086 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:12:22.156159   61086 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:12:22.156177   61086 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:12:22.156239   61086 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:12:22.156301   61086 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:12:22.156376   61086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:12:22.166632   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:12:22.190167   61086 start.go:296] duration metric: took 123.314106ms for postStartSetup
	I0205 03:12:22.190215   61086 fix.go:56] duration metric: took 6.437916565s for fixHost
	I0205 03:12:22.190231   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:22.192818   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.193137   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:22.193150   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.193321   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:22.193553   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:22.193686   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:22.193787   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:22.193874   61086 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:22.194031   61086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.120 22 <nil> <nil>}
	I0205 03:12:22.194035   61086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:12:22.293942   61086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738725142.287651267
	
	I0205 03:12:22.293956   61086 fix.go:216] guest clock: 1738725142.287651267
	I0205 03:12:22.293964   61086 fix.go:229] Guest: 2025-02-05 03:12:22.287651267 +0000 UTC Remote: 2025-02-05 03:12:22.190217298 +0000 UTC m=+6.590301571 (delta=97.433969ms)
	I0205 03:12:22.293990   61086 fix.go:200] guest clock delta is within tolerance: 97.433969ms
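
fix.go above compares the guest's date +%s.%N output with the host-side timestamp and accepts the machine when the skew is small. A minimal Go sketch of that delta calculation, using the exact values from the log; the 2s tolerance is an assumption for illustration, not necessarily minikube's threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// Values from the log above.
		guest, err := parseGuestClock("1738725142.287651267")
		if err != nil {
			panic(err)
		}
		remote := time.Date(2025, time.February, 5, 3, 12, 22, 190217298, time.UTC)

		delta := guest.Sub(remote)
		const tolerance = 2 * time.Second // assumed threshold for illustration
		fmt.Printf("delta=%v, within tolerance=%v: %v\n", delta, tolerance, delta.Abs() <= tolerance)
	}
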
	I0205 03:12:22.293995   61086 start.go:83] releasing machines lock for "cert-expiration-908105", held for 6.541722078s
	I0205 03:12:22.294019   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:22.294319   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetIP
	I0205 03:12:22.296797   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.297093   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:22.297116   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.297195   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:22.297742   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:22.297920   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .DriverName
	I0205 03:12:22.298014   61086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:12:22.298054   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:22.298112   61086 ssh_runner.go:195] Run: cat /version.json
	I0205 03:12:22.298128   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHHostname
	I0205 03:12:22.300589   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.300925   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:22.300989   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.301006   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.301083   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:22.301233   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:22.301398   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:22.301435   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:22.301449   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:22.301515   61086 sshutil.go:53] new ssh client: &{IP:192.168.72.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/cert-expiration-908105/id_rsa Username:docker}
	I0205 03:12:22.301630   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHPort
	I0205 03:12:22.301755   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHKeyPath
	I0205 03:12:22.301879   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetSSHUsername
	I0205 03:12:22.301984   61086 sshutil.go:53] new ssh client: &{IP:192.168.72.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/cert-expiration-908105/id_rsa Username:docker}
	I0205 03:12:22.394716   61086 ssh_runner.go:195] Run: systemctl --version
	I0205 03:12:22.400355   61086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:12:22.557683   61086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:12:22.563537   61086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:12:22.563603   61086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:12:22.572849   61086 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0205 03:12:22.572861   61086 start.go:495] detecting cgroup driver to use...
	I0205 03:12:22.572928   61086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:12:22.589051   61086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:12:22.602859   61086 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:12:22.602905   61086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:12:22.616940   61086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:12:22.630908   61086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:12:22.767095   61086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:12:22.896881   61086 docker.go:233] disabling docker service ...
	I0205 03:12:22.896933   61086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:12:22.914125   61086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:12:22.928198   61086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:12:23.071383   61086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:12:23.216997   61086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:12:23.230933   61086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:12:23.249243   61086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 03:12:23.249289   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.259366   61086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:12:23.259433   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.271169   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.282778   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.293270   61086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:12:23.303778   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.313936   61086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.324281   61086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:23.334099   61086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:12:23.343253   61086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 03:12:23.352797   61086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:12:23.504162   61086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:12:23.732871   61086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:12:23.732932   61086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:12:23.737776   61086 start.go:563] Will wait 60s for crictl version
	I0205 03:12:23.737825   61086 ssh_runner.go:195] Run: which crictl
	I0205 03:12:23.743163   61086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:12:23.794191   61086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:12:23.794276   61086 ssh_runner.go:195] Run: crio --version
	I0205 03:12:23.821765   61086 ssh_runner.go:195] Run: crio --version
	I0205 03:12:23.850532   61086 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0205 03:12:23.559113   60573 addons.go:514] duration metric: took 3.071416ms for enable addons: enabled=[]
	I0205 03:12:23.559150   60573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:12:23.747384   60573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:12:23.765521   60573 node_ready.go:35] waiting up to 6m0s for node "pause-922984" to be "Ready" ...
	I0205 03:12:23.768751   60573 node_ready.go:49] node "pause-922984" has status "Ready":"True"
	I0205 03:12:23.768772   60573 node_ready.go:38] duration metric: took 3.214431ms for node "pause-922984" to be "Ready" ...
	I0205 03:12:23.768781   60573 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:12:23.771858   60573 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-wtrdd" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.776541   60573 pod_ready.go:93] pod "coredns-668d6bf9bc-wtrdd" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:23.776564   60573 pod_ready.go:82] duration metric: took 4.678344ms for pod "coredns-668d6bf9bc-wtrdd" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.776577   60573 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.922639   60573 pod_ready.go:93] pod "etcd-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:23.922675   60573 pod_ready.go:82] duration metric: took 146.090176ms for pod "etcd-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.922690   60573 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:23.851848   61086 main.go:141] libmachine: (cert-expiration-908105) Calling .GetIP
	I0205 03:12:23.854665   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:23.854983   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:67:e2", ip: ""} in network mk-cert-expiration-908105: {Iface:virbr1 ExpiryTime:2025-02-05 04:08:47 +0000 UTC Type:0 Mac:52:54:00:a1:67:e2 Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:cert-expiration-908105 Clientid:01:52:54:00:a1:67:e2}
	I0205 03:12:23.855020   61086 main.go:141] libmachine: (cert-expiration-908105) DBG | domain cert-expiration-908105 has defined IP address 192.168.72.120 and MAC address 52:54:00:a1:67:e2 in network mk-cert-expiration-908105
	I0205 03:12:23.855258   61086 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0205 03:12:23.859585   61086 kubeadm.go:883] updating cluster {Name:cert-expiration-908105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-908105 Namespac
e:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.120 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:12:23.859678   61086 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:12:23.859717   61086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:12:23.907016   61086 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:12:23.907025   61086 crio.go:433] Images already preloaded, skipping extraction
	I0205 03:12:23.907069   61086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:12:23.940204   61086 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:12:23.940215   61086 cache_images.go:84] Images are preloaded, skipping loading
	I0205 03:12:23.940220   61086 kubeadm.go:934] updating node { 192.168.72.120 8443 v1.32.1 crio true true} ...
	I0205 03:12:23.940342   61086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-908105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:cert-expiration-908105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0205 03:12:23.940402   61086 ssh_runner.go:195] Run: crio config
	I0205 03:12:23.992494   61086 cni.go:84] Creating CNI manager for ""
	I0205 03:12:23.992510   61086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:12:23.992516   61086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:12:23.992533   61086 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.120 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-908105 NodeName:cert-expiration-908105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 03:12:23.992639   61086 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-908105"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 03:12:23.992690   61086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 03:12:24.002851   61086 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:12:24.002904   61086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:12:24.012431   61086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0205 03:12:24.029395   61086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:12:24.045896   61086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
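
The kubeadm, kubelet and kube-proxy configuration shown above is rendered by minikube from the options logged at kubeadm.go:189 and then copied to /var/tmp/minikube/kubeadm.yaml.new. As a rough sketch of how such a fragment could be rendered from those parameters with Go's text/template (an illustration only, not minikube's actual template or field set):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams holds a few of the options shown in the kubeadm.go:189 log line.
	type kubeadmParams struct {
		AdvertiseAddress  string
		APIServerPort     int
		NodeName          string
		PodSubnet         string
		ServiceCIDR       string
		KubernetesVersion string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		// Parameter values taken from the log above.
		p := kubeadmParams{
			AdvertiseAddress:  "192.168.72.120",
			APIServerPort:     8443,
			NodeName:          "cert-expiration-908105",
			PodSubnet:         "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
			KubernetesVersion: "v1.32.1",
		}
		// Render the fragment to stdout; minikube writes the full config to the
		// kubeadm.yaml.new path seen in the scp line above before invoking kubeadm.
		t := template.Must(template.New("kubeadm").Parse(initTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
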
	I0205 03:12:24.062039   61086 ssh_runner.go:195] Run: grep 192.168.72.120	control-plane.minikube.internal$ /etc/hosts
	I0205 03:12:24.065885   61086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:12:24.203022   61086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:12:24.245195   61086 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105 for IP: 192.168.72.120
	I0205 03:12:24.245221   61086 certs.go:194] generating shared ca certs ...
	I0205 03:12:24.245240   61086 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:24.245457   61086 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:12:24.245528   61086 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:12:24.245538   61086 certs.go:256] generating profile certs ...
	W0205 03:12:24.245703   61086 out.go:270] ! Certificate client.crt has expired. Generating a new one...
	I0205 03:12:24.245729   61086 certs.go:624] cert expired /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.crt: expiration: 2025-02-05 03:12:01 +0000 UTC, now: 2025-02-05 03:12:24.245722276 +0000 UTC m=+8.645806550
	I0205 03:12:24.245857   61086 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.key
	I0205 03:12:24.245891   61086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.crt with IP's: []
	I0205 03:12:24.476576   61086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.crt ...
	I0205 03:12:24.476593   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.crt: {Name:mk250fda3344694083c629713ce2867a46dc930f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:24.476734   61086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.key ...
	I0205 03:12:24.476741   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.key: {Name:mkc2aeee635b17e08281429f1208655e82615088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0205 03:12:24.476887   61086 out.go:270] ! Certificate apiserver.crt.019bc120 has expired. Generating a new one...
	I0205 03:12:24.476905   61086 certs.go:624] cert expired /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt.019bc120: expiration: 2025-02-05 03:12:01 +0000 UTC, now: 2025-02-05 03:12:24.476900105 +0000 UTC m=+8.876984378
	I0205 03:12:24.476971   61086 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key.019bc120
	I0205 03:12:24.476982   61086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt.019bc120 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.120]
	I0205 03:12:24.720526   61086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt.019bc120 ...
	I0205 03:12:24.720545   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt.019bc120: {Name:mke2cc689fbdbb37bc1c811ef4041565b8fc4f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:24.720699   61086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key.019bc120 ...
	I0205 03:12:24.720709   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key.019bc120: {Name:mk7a71f9525dc776b581de8e7f122be2f8cd9881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:24.720789   61086 certs.go:381] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt.019bc120 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt
	I0205 03:12:24.720955   61086 certs.go:385] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key.019bc120 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key
	W0205 03:12:24.721193   61086 out.go:270] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0205 03:12:24.721215   61086 certs.go:624] cert expired /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.crt: expiration: 2025-02-05 03:12:02 +0000 UTC, now: 2025-02-05 03:12:24.721209953 +0000 UTC m=+9.121294227
	I0205 03:12:24.721291   61086 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.key
	I0205 03:12:24.721309   61086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.crt with IP's: []
	I0205 03:12:24.903018   61086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.crt ...
	I0205 03:12:24.903032   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.crt: {Name:mkd28c763065f1c0605dffd527607148c50a5ba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:24.903176   61086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.key ...
	I0205 03:12:24.903185   61086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.key: {Name:mk0f46e4fc691e53e4ec8eb3c97d09feab30b1bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:24.903357   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:12:24.903388   61086 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:12:24.903394   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:12:24.903416   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:12:24.903439   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:12:24.903455   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:12:24.903486   61086 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:12:24.904053   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:12:25.025071   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:12:25.087638   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:12:25.129554   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:12:25.207315   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0205 03:12:25.283196   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0205 03:12:25.338405   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:12:25.395551   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 03:12:25.458211   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:12:25.487739   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:12:25.515142   61086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:12:25.554561   61086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:12:25.577287   61086 ssh_runner.go:195] Run: openssl version
	I0205 03:12:25.585367   61086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:12:25.597083   61086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:12:25.601152   61086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:12:25.601191   61086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:12:25.608352   61086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:12:25.619316   61086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:12:25.633547   61086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:12:25.637830   61086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:12:25.637865   61086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
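
The certs.go lines above detect that client.crt, apiserver.crt.019bc120 and proxy-client.crt expired at about 03:12 UTC and regenerate them before the cluster is brought back up. A minimal Go sketch of the expiry check itself, using crypto/x509 against the client.crt path from this log (a sketch of the idea, not minikube's certs.go implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certExpired reports whether the PEM-encoded certificate at path has passed its NotAfter date.
	func certExpired(path string, now time.Time) (bool, time.Time, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, time.Time{}, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, time.Time{}, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, time.Time{}, err
		}
		return now.After(cert.NotAfter), cert.NotAfter, nil
	}

	func main() {
		// Certificate path as reported in the log above.
		path := "/home/jenkins/minikube-integration/20363-12788/.minikube/profiles/cert-expiration-908105/client.crt"
		expired, notAfter, err := certExpired(path, time.Now())
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("expired=%v (NotAfter=%s)\n", expired, notAfter.UTC().Format(time.RFC3339))
	}
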
	I0205 03:12:24.322413   60573 pod_ready.go:93] pod "kube-apiserver-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:24.322443   60573 pod_ready.go:82] duration metric: took 399.744838ms for pod "kube-apiserver-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:24.322458   60573 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:24.723236   60573 pod_ready.go:93] pod "kube-controller-manager-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:24.723260   60573 pod_ready.go:82] duration metric: took 400.793029ms for pod "kube-controller-manager-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:24.723272   60573 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dwrtm" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:25.122458   60573 pod_ready.go:93] pod "kube-proxy-dwrtm" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:25.122486   60573 pod_ready.go:82] duration metric: took 399.207926ms for pod "kube-proxy-dwrtm" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:25.122496   60573 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:25.521769   60573 pod_ready.go:93] pod "kube-scheduler-pause-922984" in "kube-system" namespace has status "Ready":"True"
	I0205 03:12:25.521792   60573 pod_ready.go:82] duration metric: took 399.289951ms for pod "kube-scheduler-pause-922984" in "kube-system" namespace to be "Ready" ...
	I0205 03:12:25.521801   60573 pod_ready.go:39] duration metric: took 1.753009653s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:12:25.521816   60573 api_server.go:52] waiting for apiserver process to appear ...
	I0205 03:12:25.521865   60573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:12:25.542213   60573 api_server.go:72] duration metric: took 1.986261842s to wait for apiserver process to appear ...
	I0205 03:12:25.542241   60573 api_server.go:88] waiting for apiserver healthz status ...
	I0205 03:12:25.542258   60573 api_server.go:253] Checking apiserver healthz at https://192.168.50.73:8443/healthz ...
	I0205 03:12:25.547820   60573 api_server.go:279] https://192.168.50.73:8443/healthz returned 200:
	ok
	I0205 03:12:25.549035   60573 api_server.go:141] control plane version: v1.32.1
	I0205 03:12:25.549056   60573 api_server.go:131] duration metric: took 6.807972ms to wait for apiserver health ...
	I0205 03:12:25.549066   60573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 03:12:25.723471   60573 system_pods.go:59] 6 kube-system pods found
	I0205 03:12:25.723502   60573 system_pods.go:61] "coredns-668d6bf9bc-wtrdd" [8021c0ec-321e-485c-8be5-8c775a1a6bd6] Running
	I0205 03:12:25.723508   60573 system_pods.go:61] "etcd-pause-922984" [fd03e340-5cf6-421b-87f4-df40bd77f11b] Running
	I0205 03:12:25.723512   60573 system_pods.go:61] "kube-apiserver-pause-922984" [57b48e9b-32af-423f-a24f-d9778038297f] Running
	I0205 03:12:25.723516   60573 system_pods.go:61] "kube-controller-manager-pause-922984" [9a2deb1a-46f7-4d41-857c-2d5f8874b507] Running
	I0205 03:12:25.723519   60573 system_pods.go:61] "kube-proxy-dwrtm" [5a97e2a0-0706-4603-8471-b77d9645621a] Running
	I0205 03:12:25.723523   60573 system_pods.go:61] "kube-scheduler-pause-922984" [d4bc9b90-e9f4-4af8-adaf-cfe8d027e9d2] Running
	I0205 03:12:25.723529   60573 system_pods.go:74] duration metric: took 174.457454ms to wait for pod list to return data ...
	I0205 03:12:25.723543   60573 default_sa.go:34] waiting for default service account to be created ...
	I0205 03:12:25.922362   60573 default_sa.go:45] found service account: "default"
	I0205 03:12:25.922388   60573 default_sa.go:55] duration metric: took 198.839271ms for default service account to be created ...
	I0205 03:12:25.922399   60573 system_pods.go:116] waiting for k8s-apps to be running ...
	I0205 03:12:26.123321   60573 system_pods.go:86] 6 kube-system pods found
	I0205 03:12:26.123354   60573 system_pods.go:89] "coredns-668d6bf9bc-wtrdd" [8021c0ec-321e-485c-8be5-8c775a1a6bd6] Running
	I0205 03:12:26.123359   60573 system_pods.go:89] "etcd-pause-922984" [fd03e340-5cf6-421b-87f4-df40bd77f11b] Running
	I0205 03:12:26.123363   60573 system_pods.go:89] "kube-apiserver-pause-922984" [57b48e9b-32af-423f-a24f-d9778038297f] Running
	I0205 03:12:26.123367   60573 system_pods.go:89] "kube-controller-manager-pause-922984" [9a2deb1a-46f7-4d41-857c-2d5f8874b507] Running
	I0205 03:12:26.123371   60573 system_pods.go:89] "kube-proxy-dwrtm" [5a97e2a0-0706-4603-8471-b77d9645621a] Running
	I0205 03:12:26.123374   60573 system_pods.go:89] "kube-scheduler-pause-922984" [d4bc9b90-e9f4-4af8-adaf-cfe8d027e9d2] Running
	I0205 03:12:26.123383   60573 system_pods.go:126] duration metric: took 200.976725ms to wait for k8s-apps to be running ...
	I0205 03:12:26.123391   60573 system_svc.go:44] waiting for kubelet service to be running ....
	I0205 03:12:26.123438   60573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:12:26.137986   60573 system_svc.go:56] duration metric: took 14.580753ms WaitForService to wait for kubelet
	I0205 03:12:26.138027   60573 kubeadm.go:582] duration metric: took 2.58208043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:12:26.138050   60573 node_conditions.go:102] verifying NodePressure condition ...
	I0205 03:12:26.323017   60573 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0205 03:12:26.323043   60573 node_conditions.go:123] node cpu capacity is 2
	I0205 03:12:26.323055   60573 node_conditions.go:105] duration metric: took 184.998131ms to run NodePressure ...
	I0205 03:12:26.323071   60573 start.go:241] waiting for startup goroutines ...
	I0205 03:12:26.323081   60573 start.go:246] waiting for cluster config update ...
	I0205 03:12:26.323091   60573 start.go:255] writing updated cluster config ...
	I0205 03:12:26.323452   60573 ssh_runner.go:195] Run: rm -f paused
	I0205 03:12:26.372942   60573 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0205 03:12:26.375497   60573 out.go:177] * Done! kubectl is now configured to use "pause-922984" cluster and "default" namespace by default
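
The pod_ready and api_server lines above poll each system pod's Ready condition and the apiserver /healthz endpoint until the restarted pause-922984 cluster reports healthy. A rough client-go sketch of such a readiness poll, using the kubeconfig path and one of the pod names from this log (an illustration under an assumed client-go version, not minikube's pod_ready.go):

	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path as reported earlier in this log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20363-12788/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		const name = "kube-apiserver-pause-922984" // one of the pods waited on above
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err == nil && podReady(pod) {
				log.Printf("pod %q is Ready", name)
				return
			}
			select {
			case <-ctx.Done():
				log.Fatalf("timed out waiting for %q: %v", name, ctx.Err())
			case <-time.After(2 * time.Second):
			}
		}
	}
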
	
	
	==> CRI-O <==
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.806306197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725148806283892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a58de954-4f5b-4422-85c1-749dfd12de1e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.806992330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=950d7283-3726-4467-9b8c-e8ecb5be5e14 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.807043738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=950d7283-3726-4467-9b8c-e8ecb5be5e14 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.807311473Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db8d434450a5cfc14524cc376aa4b79126a93e9d17e9636e877a52468b58bd77,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738725130327729708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60086a7d780e3e08daa6e15b4c6f1ea956f24b051ee07502a89f0b0a786123f1,PodSandboxId:f87380e0e0e408b3b74124142e63e403abe06d98fafe5b8769256019eaebcb5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738725130317812537,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f43660cc317d03bbc6e9da03acf5d756cb2b52fdd9fb37056777ad8cf021ff,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738725125702892301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e00469
11418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b5eaa59adcef8e2d9b77f8b4652dbd141d5356b207fc0439b32d502da9947d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738725125673470983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f8f0eea5d0f5c9c03dca3fb1e7e750817c6c3016c056cd4933497f5c9a1df3,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738725125674724884,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de
9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9927c9020d4e0f78713fd5f987dc955646e0c45ee094446f894d94da249bbea8,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738725125652972873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755358df556dce6f2e48e92d581a1094200b24c9bdc77d425911145cd466d35b,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738725109264538595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e0046911418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceebda47a58d2ce8cf7d15ecb18287cddd291719c8c903043598bc9d961b5d9d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738725109218815338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2bc549c414cff2ca0c6a6112bdd1714da74d43a9810061be4cb7dfa1177c9b,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738725109169875433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738725109098142841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394185c489c99169a014f18ddb655f9d0e65f36644a3e0db6cd019393a37bb93,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738725109002456479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e,PodSandboxId:14ac55c55b0959908abe3581d16edb85f29e16a7052b1001d73e84312a077503,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738725097087905475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=950d7283-3726-4467-9b8c-e8ecb5be5e14 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.852447361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45a2218f-192f-4dfb-b6c4-8fb8a9a32020 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.852520449Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45a2218f-192f-4dfb-b6c4-8fb8a9a32020 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.853498267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=073bb33a-20b4-4fe7-b84e-c49338f5534c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.853886437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725148853863896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=073bb33a-20b4-4fe7-b84e-c49338f5534c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.854379064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f5b5f6b-5728-4317-a133-437786c2a329 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.854432343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f5b5f6b-5728-4317-a133-437786c2a329 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.854660745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db8d434450a5cfc14524cc376aa4b79126a93e9d17e9636e877a52468b58bd77,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738725130327729708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60086a7d780e3e08daa6e15b4c6f1ea956f24b051ee07502a89f0b0a786123f1,PodSandboxId:f87380e0e0e408b3b74124142e63e403abe06d98fafe5b8769256019eaebcb5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738725130317812537,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f43660cc317d03bbc6e9da03acf5d756cb2b52fdd9fb37056777ad8cf021ff,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738725125702892301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e00469
11418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b5eaa59adcef8e2d9b77f8b4652dbd141d5356b207fc0439b32d502da9947d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738725125673470983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f8f0eea5d0f5c9c03dca3fb1e7e750817c6c3016c056cd4933497f5c9a1df3,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738725125674724884,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de
9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9927c9020d4e0f78713fd5f987dc955646e0c45ee094446f894d94da249bbea8,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738725125652972873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755358df556dce6f2e48e92d581a1094200b24c9bdc77d425911145cd466d35b,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738725109264538595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e0046911418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceebda47a58d2ce8cf7d15ecb18287cddd291719c8c903043598bc9d961b5d9d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738725109218815338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2bc549c414cff2ca0c6a6112bdd1714da74d43a9810061be4cb7dfa1177c9b,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738725109169875433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738725109098142841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394185c489c99169a014f18ddb655f9d0e65f36644a3e0db6cd019393a37bb93,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738725109002456479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e,PodSandboxId:14ac55c55b0959908abe3581d16edb85f29e16a7052b1001d73e84312a077503,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738725097087905475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f5b5f6b-5728-4317-a133-437786c2a329 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.900138489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=857a1cad-4d4c-4d30-adb8-ed6ae14dd2ac name=/runtime.v1.RuntimeService/Version
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.900212694Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=857a1cad-4d4c-4d30-adb8-ed6ae14dd2ac name=/runtime.v1.RuntimeService/Version
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.901274174Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9180507-2362-46aa-b812-d3a055384ecf name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.902291911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725148902255846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9180507-2362-46aa-b812-d3a055384ecf name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.905470959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e68d47d-f1dc-4434-9e64-59831798fb31 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.905572520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e68d47d-f1dc-4434-9e64-59831798fb31 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.905825763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db8d434450a5cfc14524cc376aa4b79126a93e9d17e9636e877a52468b58bd77,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738725130327729708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60086a7d780e3e08daa6e15b4c6f1ea956f24b051ee07502a89f0b0a786123f1,PodSandboxId:f87380e0e0e408b3b74124142e63e403abe06d98fafe5b8769256019eaebcb5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738725130317812537,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f43660cc317d03bbc6e9da03acf5d756cb2b52fdd9fb37056777ad8cf021ff,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738725125702892301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e00469
11418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b5eaa59adcef8e2d9b77f8b4652dbd141d5356b207fc0439b32d502da9947d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738725125673470983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f8f0eea5d0f5c9c03dca3fb1e7e750817c6c3016c056cd4933497f5c9a1df3,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738725125674724884,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de
9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9927c9020d4e0f78713fd5f987dc955646e0c45ee094446f894d94da249bbea8,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738725125652972873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755358df556dce6f2e48e92d581a1094200b24c9bdc77d425911145cd466d35b,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738725109264538595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e0046911418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceebda47a58d2ce8cf7d15ecb18287cddd291719c8c903043598bc9d961b5d9d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738725109218815338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2bc549c414cff2ca0c6a6112bdd1714da74d43a9810061be4cb7dfa1177c9b,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738725109169875433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738725109098142841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394185c489c99169a014f18ddb655f9d0e65f36644a3e0db6cd019393a37bb93,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738725109002456479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e,PodSandboxId:14ac55c55b0959908abe3581d16edb85f29e16a7052b1001d73e84312a077503,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738725097087905475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e68d47d-f1dc-4434-9e64-59831798fb31 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.947167541Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da78c720-92b6-4f4e-8447-c301d1f0a09f name=/runtime.v1.RuntimeService/Version
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.947241648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da78c720-92b6-4f4e-8447-c301d1f0a09f name=/runtime.v1.RuntimeService/Version
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.949472283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fec388a-2c5b-4771-8cc4-65329dab9c00 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.949831188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725148949807579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fec388a-2c5b-4771-8cc4-65329dab9c00 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.950305405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=609d0f94-afb0-45ae-94e7-f6d111cb4781 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.950415651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=609d0f94-afb0-45ae-94e7-f6d111cb4781 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:12:28 pause-922984 crio[2931]: time="2025-02-05 03:12:28.950647419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db8d434450a5cfc14524cc376aa4b79126a93e9d17e9636e877a52468b58bd77,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1738725130327729708,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60086a7d780e3e08daa6e15b4c6f1ea956f24b051ee07502a89f0b0a786123f1,PodSandboxId:f87380e0e0e408b3b74124142e63e403abe06d98fafe5b8769256019eaebcb5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1738725130317812537,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f43660cc317d03bbc6e9da03acf5d756cb2b52fdd9fb37056777ad8cf021ff,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1738725125702892301,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e00469
11418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75b5eaa59adcef8e2d9b77f8b4652dbd141d5356b207fc0439b32d502da9947d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1738725125673470983,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f8f0eea5d0f5c9c03dca3fb1e7e750817c6c3016c056cd4933497f5c9a1df3,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1738725125674724884,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de
9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9927c9020d4e0f78713fd5f987dc955646e0c45ee094446f894d94da249bbea8,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1738725125652972873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.
kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755358df556dce6f2e48e92d581a1094200b24c9bdc77d425911145cd466d35b,PodSandboxId:bff9d333dac430283985da87e91d13dceabf8a45a0390ab9a602ed27fef9e5da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1738725109264538595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08a1e0046911418f9566e5cbc595a92c,},Annotations:map[string]string{io.kubernetes.containe
r.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceebda47a58d2ce8cf7d15ecb18287cddd291719c8c903043598bc9d961b5d9d,PodSandboxId:47b15fee57f299126533e3910c66bfee0bc47f7057717d630772fb712a5dc2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1738725109218815338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d539d0756919905646bd69b0bfeffc5,},Annotations:map[string]string{io.kubernetes.
container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f2bc549c414cff2ca0c6a6112bdd1714da74d43a9810061be4cb7dfa1177c9b,PodSandboxId:7bbfa4fb20c5187cce7d6e7a4b19519400bd48aec1dddcca0979193771f9c8f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1738725109169875433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5613037671f10c90f2e79dddf1916e,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd,PodSandboxId:4c7e1e87ced6439d52516692ed679ff7f7111593b6d3f5936086d3aa98e48ba1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1738725109098142841,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwrtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a97e2a0-0706-4603-8471-b77d9645621a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394185c489c99169a014f18ddb655f9d0e65f36644a3e0db6cd019393a37bb93,PodSandboxId:769570524c5d0fa0e001e6717b7230c3d5f6b5909ba02d0d8218e7dbe72327db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1738725109002456479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-922984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d0ba0cbd041357457de9134c2ddaa5e,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e,PodSandboxId:14ac55c55b0959908abe3581d16edb85f29e16a7052b1001d73e84312a077503,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1738725097087905475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wtrdd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8021c0ec-321e-485c-8be5-8c775a1a6bd6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=609d0f94-afb0-45ae-94e7-f6d111cb4781 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db8d434450a5c       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   18 seconds ago      Running             kube-proxy                3                   4c7e1e87ced64       kube-proxy-dwrtm
	60086a7d780e3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   18 seconds ago      Running             coredns                   2                   f87380e0e0e40       coredns-668d6bf9bc-wtrdd
	57f43660cc317       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   23 seconds ago      Running             kube-scheduler            3                   bff9d333dac43       kube-scheduler-pause-922984
	27f8f0eea5d0f       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   23 seconds ago      Running             kube-apiserver            3                   769570524c5d0       kube-apiserver-pause-922984
	75b5eaa59adce       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   23 seconds ago      Running             kube-controller-manager   3                   47b15fee57f29       kube-controller-manager-pause-922984
	9927c9020d4e0       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   23 seconds ago      Running             etcd                      3                   7bbfa4fb20c51       etcd-pause-922984
	755358df556dc       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   39 seconds ago      Exited              kube-scheduler            2                   bff9d333dac43       kube-scheduler-pause-922984
	ceebda47a58d2       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   39 seconds ago      Exited              kube-controller-manager   2                   47b15fee57f29       kube-controller-manager-pause-922984
	5f2bc549c414c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   39 seconds ago      Exited              etcd                      2                   7bbfa4fb20c51       etcd-pause-922984
	58251f22ebcb3       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   39 seconds ago      Exited              kube-proxy                2                   4c7e1e87ced64       kube-proxy-dwrtm
	394185c489c99       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   40 seconds ago      Exited              kube-apiserver            2                   769570524c5d0       kube-apiserver-pause-922984
	0b1cdef4f5830       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   51 seconds ago      Exited              coredns                   1                   14ac55c55b095       coredns-668d6bf9bc-wtrdd
	
	
	==> coredns [0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:35859 - 63632 "HINFO IN 7075890073473774007.5113352051000378399. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03451235s
	
	
	==> coredns [60086a7d780e3e08daa6e15b4c6f1ea956f24b051ee07502a89f0b0a786123f1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52742 - 21352 "HINFO IN 1416766307856171938.83712086921647280. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.021559449s
	
	
	==> describe nodes <==
	Name:               pause-922984
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-922984
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d
	                    minikube.k8s.io/name=pause-922984
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_05T03_11_12_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 05 Feb 2025 03:11:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-922984
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 05 Feb 2025 03:12:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 05 Feb 2025 03:12:09 +0000   Wed, 05 Feb 2025 03:11:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 05 Feb 2025 03:12:09 +0000   Wed, 05 Feb 2025 03:11:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 05 Feb 2025 03:12:09 +0000   Wed, 05 Feb 2025 03:11:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 05 Feb 2025 03:12:09 +0000   Wed, 05 Feb 2025 03:11:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.73
	  Hostname:    pause-922984
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f93e6d9b72644108bc8444d617888a4
	  System UUID:                5f93e6d9-b726-4410-8bc8-444d617888a4
	  Boot ID:                    beb2962c-4dd2-424e-afae-230b83170edc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-wtrdd                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
	  kube-system                 etcd-pause-922984                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         78s
	  kube-system                 kube-apiserver-pause-922984             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-922984    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-dwrtm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-922984             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 72s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  Starting                 36s                kube-proxy       
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node pause-922984 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node pause-922984 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s                kubelet          Node pause-922984 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeReady                77s                kubelet          Node pause-922984 status is now: NodeReady
	  Normal  RegisteredNode           75s                node-controller  Node pause-922984 event: Registered Node pause-922984 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-922984 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-922984 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-922984 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-922984 event: Registered Node pause-922984 in Controller
	
	
	==> dmesg <==
	[  +0.065890] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051027] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.181765] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.153482] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.268000] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +4.025862] systemd-fstab-generator[736]: Ignoring "noauto" option for root device
	[Feb 5 03:11] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.060374] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.513206] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +0.105699] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.227066] kauditd_printk_skb: 31 callbacks suppressed
	[  +0.124217] systemd-fstab-generator[1484]: Ignoring "noauto" option for root device
	[ +11.062865] kauditd_printk_skb: 81 callbacks suppressed
	[  +9.278071] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.210307] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.269074] systemd-fstab-generator[2861]: Ignoring "noauto" option for root device
	[  +0.203218] systemd-fstab-generator[2892]: Ignoring "noauto" option for root device
	[  +0.371512] systemd-fstab-generator[2924]: Ignoring "noauto" option for root device
	[ +10.587790] systemd-fstab-generator[3189]: Ignoring "noauto" option for root device
	[  +0.071520] kauditd_printk_skb: 173 callbacks suppressed
	[  +5.491848] kauditd_printk_skb: 89 callbacks suppressed
	[Feb 5 03:12] systemd-fstab-generator[4088]: Ignoring "noauto" option for root device
	[  +5.599000] kauditd_printk_skb: 44 callbacks suppressed
	[ +13.226303] systemd-fstab-generator[4575]: Ignoring "noauto" option for root device
	[  +0.097871] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [5f2bc549c414cff2ca0c6a6112bdd1714da74d43a9810061be4cb7dfa1177c9b] <==
	{"level":"info","ts":"2025-02-05T03:11:51.369446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c465966f5ecfebb3 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-05T03:11:51.369492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c465966f5ecfebb3 received MsgPreVoteResp from c465966f5ecfebb3 at term 2"}
	{"level":"info","ts":"2025-02-05T03:11:51.369507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c465966f5ecfebb3 became candidate at term 3"}
	{"level":"info","ts":"2025-02-05T03:11:51.369512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c465966f5ecfebb3 received MsgVoteResp from c465966f5ecfebb3 at term 3"}
	{"level":"info","ts":"2025-02-05T03:11:51.369520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c465966f5ecfebb3 became leader at term 3"}
	{"level":"info","ts":"2025-02-05T03:11:51.369526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c465966f5ecfebb3 elected leader c465966f5ecfebb3 at term 3"}
	{"level":"info","ts":"2025-02-05T03:11:51.372541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T03:11:51.373397Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T03:11:51.372493Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"c465966f5ecfebb3","local-member-attributes":"{Name:pause-922984 ClientURLs:[https://192.168.50.73:2379]}","request-path":"/0/members/c465966f5ecfebb3/attributes","cluster-id":"ee292103c115fe9e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-05T03:11:51.373688Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T03:11:51.373916Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-05T03:11:51.373953Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-05T03:11:51.374143Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.73:2379"}
	{"level":"info","ts":"2025-02-05T03:11:51.374384Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T03:11:51.374930Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-05T03:11:52.959931Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-05T03:11:52.960024Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-922984","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.73:2380"],"advertise-client-urls":["https://192.168.50.73:2379"]}
	{"level":"warn","ts":"2025-02-05T03:11:52.960144Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T03:11:52.960256Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T03:11:52.977319Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.73:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T03:11:52.977457Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.73:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-05T03:11:52.977585Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c465966f5ecfebb3","current-leader-member-id":"c465966f5ecfebb3"}
	{"level":"info","ts":"2025-02-05T03:11:52.984141Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.50.73:2380"}
	{"level":"info","ts":"2025-02-05T03:11:52.984309Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.50.73:2380"}
	{"level":"info","ts":"2025-02-05T03:11:52.984453Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-922984","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.73:2380"],"advertise-client-urls":["https://192.168.50.73:2379"]}
	
	
	==> etcd [9927c9020d4e0f78713fd5f987dc955646e0c45ee094446f894d94da249bbea8] <==
	{"level":"warn","ts":"2025-02-05T03:12:13.617729Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"641.272165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" limit:1 ","response":"range_response_count:1 size:370"}
	{"level":"info","ts":"2025-02-05T03:12:13.619684Z","caller":"traceutil/trace.go:171","msg":"trace[2095290305] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:425; }","duration":"643.244471ms","start":"2025-02-05T03:12:12.976422Z","end":"2025-02-05T03:12:13.619666Z","steps":["trace[2095290305] 'agreement among raft nodes before linearized reading'  (duration: 641.2546ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:13.618016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"365.237924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2025-02-05T03:12:13.619829Z","caller":"traceutil/trace.go:171","msg":"trace[1530625671] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:425; }","duration":"367.079513ms","start":"2025-02-05T03:12:13.252737Z","end":"2025-02-05T03:12:13.619816Z","steps":["trace[1530625671] 'agreement among raft nodes before linearized reading'  (duration: 365.203423ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:13.619891Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:13.252724Z","time spent":"367.150776ms","remote":"127.0.0.1:39480","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4157,"request content":"key:\"/registry/deployments/kube-system/coredns\" limit:1 "}
	{"level":"warn","ts":"2025-02-05T03:12:13.619856Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:12.976412Z","time spent":"643.381316ms","remote":"127.0.0.1:39154","response type":"/etcdserverpb.KV/Range","request count":0,"request size":89,"response count":1,"response size":394,"request content":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" limit:1 "}
	{"level":"warn","ts":"2025-02-05T03:12:14.336548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"327.70388ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16984082258399836864 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" mod_revision:424 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-02-05T03:12:14.336957Z","caller":"traceutil/trace.go:171","msg":"trace[875598015] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:465; }","duration":"695.959195ms","start":"2025-02-05T03:12:13.640980Z","end":"2025-02-05T03:12:14.336939Z","steps":["trace[875598015] 'read index received'  (duration: 367.769247ms)","trace[875598015] 'applied index is now lower than readState.Index'  (duration: 328.189432ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-05T03:12:14.337231Z","caller":"traceutil/trace.go:171","msg":"trace[2024005879] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"698.486237ms","start":"2025-02-05T03:12:13.638733Z","end":"2025-02-05T03:12:14.337219Z","steps":["trace[2024005879] 'process raft request'  (duration: 370.060285ms)","trace[2024005879] 'compare'  (duration: 327.390866ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-05T03:12:14.338645Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:13.638709Z","time spent":"699.886542ms","remote":"127.0.0.1:39516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3830,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" mod_revision:424 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" value_size:3770 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-668d6bf9bc\" > >"}
	{"level":"info","ts":"2025-02-05T03:12:14.337369Z","caller":"traceutil/trace.go:171","msg":"trace[2045710403] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"698.107859ms","start":"2025-02-05T03:12:13.639216Z","end":"2025-02-05T03:12:14.337324Z","steps":["trace[2045710403] 'process raft request'  (duration: 697.64756ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:14.338834Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:13.639200Z","time spent":"699.582262ms","remote":"127.0.0.1:39480","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:422 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2025-02-05T03:12:14.337393Z","caller":"traceutil/trace.go:171","msg":"trace[1781317986] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"601.778971ms","start":"2025-02-05T03:12:13.735608Z","end":"2025-02-05T03:12:14.337387Z","steps":["trace[1781317986] 'process raft request'  (duration: 601.308145ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:14.337479Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"696.483953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-922984\" limit:1 ","response":"range_response_count:1 size:5840"}
	{"level":"info","ts":"2025-02-05T03:12:14.339024Z","caller":"traceutil/trace.go:171","msg":"trace[1223521871] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-922984; range_end:; response_count:1; response_revision:428; }","duration":"698.060131ms","start":"2025-02-05T03:12:13.640950Z","end":"2025-02-05T03:12:14.339010Z","steps":["trace[1223521871] 'agreement among raft nodes before linearized reading'  (duration: 696.452287ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:14.339094Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:13.640937Z","time spent":"698.143829ms","remote":"127.0.0.1:39236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5864,"request content":"key:\"/registry/pods/kube-system/etcd-pause-922984\" limit:1 "}
	{"level":"warn","ts":"2025-02-05T03:12:14.339277Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:13.735585Z","time spent":"603.659117ms","remote":"127.0.0.1:39130","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":616,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-922984.18213149fd504b3c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-922984.18213149fd504b3c\" value_size:544 lease:7760710221545061038 >> failure:<>"}
	{"level":"warn","ts":"2025-02-05T03:12:14.845481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.544591ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16984082258399836869 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-922984.1821314a06311815\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-922984.1821314a06311815\" value_size:598 lease:7760710221545061038 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-02-05T03:12:14.845641Z","caller":"traceutil/trace.go:171","msg":"trace[123226104] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"222.548058ms","start":"2025-02-05T03:12:14.623084Z","end":"2025-02-05T03:12:14.845632Z","steps":["trace[123226104] 'process raft request'  (duration: 222.490147ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T03:12:14.845658Z","caller":"traceutil/trace.go:171","msg":"trace[168612774] linearizableReadLoop","detail":"{readStateIndex:469; appliedIndex:468; }","duration":"499.946854ms","start":"2025-02-05T03:12:14.345695Z","end":"2025-02-05T03:12:14.845642Z","steps":["trace[168612774] 'read index received'  (duration: 284.198024ms)","trace[168612774] 'applied index is now lower than readState.Index'  (duration: 215.74757ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-05T03:12:14.845800Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"500.121758ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-922984\" limit:1 ","response":"range_response_count:1 size:5840"}
	{"level":"info","ts":"2025-02-05T03:12:14.847710Z","caller":"traceutil/trace.go:171","msg":"trace[1421704583] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-922984; range_end:; response_count:1; response_revision:430; }","duration":"502.054392ms","start":"2025-02-05T03:12:14.345645Z","end":"2025-02-05T03:12:14.847700Z","steps":["trace[1421704583] 'agreement among raft nodes before linearized reading'  (duration: 500.087458ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T03:12:14.847767Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:14.345634Z","time spent":"502.116232ms","remote":"127.0.0.1:39236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5864,"request content":"key:\"/registry/pods/kube-system/etcd-pause-922984\" limit:1 "}
	{"level":"info","ts":"2025-02-05T03:12:14.845984Z","caller":"traceutil/trace.go:171","msg":"trace[930455332] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"502.397111ms","start":"2025-02-05T03:12:14.343576Z","end":"2025-02-05T03:12:14.845973Z","steps":["trace[930455332] 'process raft request'  (duration: 286.191491ms)","trace[930455332] 'compare'  (duration: 215.426254ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-05T03:12:14.847937Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-05T03:12:14.343560Z","time spent":"504.347437ms","remote":"127.0.0.1:39130","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":670,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-922984.1821314a06311815\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-922984.1821314a06311815\" value_size:598 lease:7760710221545061038 >> failure:<>"}
	
	
	==> kernel <==
	 03:12:29 up 1 min,  0 users,  load average: 1.09, 0.36, 0.13
	Linux pause-922984 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [27f8f0eea5d0f5c9c03dca3fb1e7e750817c6c3016c056cd4933497f5c9a1df3] <==
	I0205 03:12:09.074137       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0205 03:12:09.074272       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0205 03:12:09.074544       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0205 03:12:09.074684       1 shared_informer.go:320] Caches are synced for configmaps
	I0205 03:12:09.074743       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0205 03:12:09.073901       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0205 03:12:09.075616       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0205 03:12:09.077564       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0205 03:12:09.085841       1 aggregator.go:171] initial CRD sync complete...
	I0205 03:12:09.085915       1 autoregister_controller.go:144] Starting autoregister controller
	I0205 03:12:09.086010       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0205 03:12:09.086034       1 cache.go:39] Caches are synced for autoregister controller
	I0205 03:12:09.089748       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0205 03:12:09.090930       1 policy_source.go:240] refreshing policies
	I0205 03:12:09.103066       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0205 03:12:09.144811       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0205 03:12:09.978280       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0205 03:12:10.037928       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0205 03:12:10.813938       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0205 03:12:10.860925       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0205 03:12:10.895234       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0205 03:12:10.902129       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0205 03:12:12.280576       1 controller.go:615] quota admission added evaluator for: endpoints
	I0205 03:12:12.975393       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0205 03:12:12.975797       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [394185c489c99169a014f18ddb655f9d0e65f36644a3e0db6cd019393a37bb93] <==
	W0205 03:12:02.272309       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.382005       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.387543       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.392920       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.405588       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.425611       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.428991       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.434794       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.434905       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.544446       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.546998       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.563161       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.588672       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.647751       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.653119       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.719680       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.719748       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.764806       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.785593       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.789075       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.809871       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.831999       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:02.961699       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:03.071479       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0205 03:12:03.078209       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [75b5eaa59adcef8e2d9b77f8b4652dbd141d5356b207fc0439b32d502da9947d] <==
	I0205 03:12:12.274899       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0205 03:12:12.275130       1 shared_informer.go:320] Caches are synced for expand
	I0205 03:12:12.276318       1 shared_informer.go:320] Caches are synced for GC
	I0205 03:12:12.276422       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0205 03:12:12.276745       1 shared_informer.go:320] Caches are synced for cronjob
	I0205 03:12:12.276786       1 shared_informer.go:320] Caches are synced for taint
	I0205 03:12:12.276840       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0205 03:12:12.276927       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-922984"
	I0205 03:12:12.276981       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0205 03:12:12.277419       1 shared_informer.go:320] Caches are synced for PVC protection
	I0205 03:12:12.277606       1 shared_informer.go:320] Caches are synced for HPA
	I0205 03:12:12.282959       1 shared_informer.go:320] Caches are synced for daemon sets
	I0205 03:12:12.287277       1 shared_informer.go:320] Caches are synced for resource quota
	I0205 03:12:12.306439       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0205 03:12:12.308736       1 shared_informer.go:320] Caches are synced for garbage collector
	I0205 03:12:12.309974       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0205 03:12:12.315666       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0205 03:12:12.327419       1 shared_informer.go:320] Caches are synced for job
	I0205 03:12:12.329844       1 shared_informer.go:320] Caches are synced for resource quota
	I0205 03:12:12.331235       1 shared_informer.go:320] Caches are synced for persistent volume
	I0205 03:12:12.337501       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0205 03:12:13.619430       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0205 03:12:13.628695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.352208873s"
	I0205 03:12:14.341185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="712.367052ms"
	I0205 03:12:14.341417       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="165.816µs"
	
	
	==> kube-controller-manager [ceebda47a58d2ce8cf7d15ecb18287cddd291719c8c903043598bc9d961b5d9d] <==
	I0205 03:11:50.359394       1 serving.go:386] Generated self-signed cert in-memory
	I0205 03:11:50.856907       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0205 03:11:50.856993       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 03:11:50.858563       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0205 03:11:50.858763       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0205 03:11:50.858918       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0205 03:11:50.858996       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0205 03:11:50.777543       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0205 03:11:52.702123       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.73"]
	E0205 03:11:52.710119       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 03:11:52.755974       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0205 03:11:52.756080       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0205 03:11:52.756108       1 server_linux.go:170] "Using iptables Proxier"
	I0205 03:11:52.766585       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 03:11:52.766939       1 server.go:497] "Version info" version="v1.32.1"
	I0205 03:11:52.767170       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 03:11:52.768801       1 config.go:199] "Starting service config controller"
	I0205 03:11:52.768906       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 03:11:52.769004       1 config.go:105] "Starting endpoint slice config controller"
	I0205 03:11:52.769025       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 03:11:52.769937       1 config.go:329] "Starting node config controller"
	I0205 03:11:52.769978       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 03:11:52.870744       1 shared_informer.go:320] Caches are synced for node config
	I0205 03:11:52.886209       1 shared_informer.go:320] Caches are synced for service config
	I0205 03:11:52.889215       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [db8d434450a5cfc14524cc376aa4b79126a93e9d17e9636e877a52468b58bd77] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0205 03:12:10.539209       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0205 03:12:10.550004       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.73"]
	E0205 03:12:10.550159       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 03:12:10.585210       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0205 03:12:10.587553       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0205 03:12:10.587599       1 server_linux.go:170] "Using iptables Proxier"
	I0205 03:12:10.590786       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 03:12:10.591184       1 server.go:497] "Version info" version="v1.32.1"
	I0205 03:12:10.591217       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 03:12:10.593462       1 config.go:199] "Starting service config controller"
	I0205 03:12:10.593510       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 03:12:10.593543       1 config.go:105] "Starting endpoint slice config controller"
	I0205 03:12:10.593565       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 03:12:10.594267       1 config.go:329] "Starting node config controller"
	I0205 03:12:10.594303       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 03:12:10.694268       1 shared_informer.go:320] Caches are synced for service config
	I0205 03:12:10.694309       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0205 03:12:10.694679       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [57f43660cc317d03bbc6e9da03acf5d756cb2b52fdd9fb37056777ad8cf021ff] <==
	I0205 03:12:07.669848       1 serving.go:386] Generated self-signed cert in-memory
	W0205 03:12:09.034937       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0205 03:12:09.034995       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0205 03:12:09.035008       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0205 03:12:09.035029       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0205 03:12:09.071690       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0205 03:12:09.071831       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 03:12:09.083576       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0205 03:12:09.085413       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 03:12:09.090375       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0205 03:12:09.085442       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0205 03:12:09.192926       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [755358df556dce6f2e48e92d581a1094200b24c9bdc77d425911145cd466d35b] <==
	I0205 03:11:50.553236       1 serving.go:386] Generated self-signed cert in-memory
	W0205 03:11:52.607517       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0205 03:11:52.607799       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0205 03:11:52.607892       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0205 03:11:52.607930       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0205 03:11:52.702924       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0205 03:11:52.703008       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0205 03:11:52.703078       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0205 03:11:52.714407       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 03:11:52.715832       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0205 03:11:52.715995       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0205 03:11:52.716704       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0205 03:11:52.716809       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0205 03:11:52.717034       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0205 03:11:52.717119       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 03:11:52.717238       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0205 03:11:52.718310       1 server.go:266] "waiting for handlers to sync" err="context canceled"
	E0205 03:11:52.718799       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 05 03:12:08 pause-922984 kubelet[4095]: E0205 03:12:08.155593    4095 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-922984\" not found" node="pause-922984"
	Feb 05 03:12:08 pause-922984 kubelet[4095]: E0205 03:12:08.156203    4095 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-922984\" not found" node="pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.108936    4095 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.136177    4095 kubelet_node_status.go:125] "Node was previously registered" node="pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.136464    4095 kubelet_node_status.go:79] "Successfully registered node" node="pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.136584    4095 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.137849    4095 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: E0205 03:12:09.149074    4095 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-922984\" already exists" pod="kube-system/etcd-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.149314    4095 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: E0205 03:12:09.188032    4095 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-922984\" already exists" pod="kube-system/kube-apiserver-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.188074    4095 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: E0205 03:12:09.204288    4095 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-922984\" already exists" pod="kube-system/kube-controller-manager-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.204317    4095 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: E0205 03:12:09.209484    4095 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-922984\" already exists" pod="kube-system/kube-scheduler-pause-922984"
	Feb 05 03:12:09 pause-922984 kubelet[4095]: I0205 03:12:09.990977    4095 apiserver.go:52] "Watching apiserver"
	Feb 05 03:12:10 pause-922984 kubelet[4095]: I0205 03:12:10.002477    4095 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Feb 05 03:12:10 pause-922984 kubelet[4095]: I0205 03:12:10.033040    4095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a97e2a0-0706-4603-8471-b77d9645621a-xtables-lock\") pod \"kube-proxy-dwrtm\" (UID: \"5a97e2a0-0706-4603-8471-b77d9645621a\") " pod="kube-system/kube-proxy-dwrtm"
	Feb 05 03:12:10 pause-922984 kubelet[4095]: I0205 03:12:10.033152    4095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a97e2a0-0706-4603-8471-b77d9645621a-lib-modules\") pod \"kube-proxy-dwrtm\" (UID: \"5a97e2a0-0706-4603-8471-b77d9645621a\") " pod="kube-system/kube-proxy-dwrtm"
	Feb 05 03:12:10 pause-922984 kubelet[4095]: I0205 03:12:10.296113    4095 scope.go:117] "RemoveContainer" containerID="58251f22ebcb311651a348e2e8b1c9eaddb5028a454a73ddd868d2d27f6dfdcd"
	Feb 05 03:12:10 pause-922984 kubelet[4095]: I0205 03:12:10.296747    4095 scope.go:117] "RemoveContainer" containerID="0b1cdef4f58308907a18141c78517f42a3abcd7b6a56598d3ba5876cdb59c11e"
	Feb 05 03:12:12 pause-922984 kubelet[4095]: I0205 03:12:12.455508    4095 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Feb 05 03:12:15 pause-922984 kubelet[4095]: E0205 03:12:15.157853    4095 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725135156629787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 03:12:15 pause-922984 kubelet[4095]: E0205 03:12:15.157905    4095 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725135156629787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 03:12:25 pause-922984 kubelet[4095]: E0205 03:12:25.160523    4095 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725145159215926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 03:12:25 pause-922984 kubelet[4095]: E0205 03:12:25.161095    4095 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725145159215926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-922984 -n pause-922984
helpers_test.go:261: (dbg) Run:  kubectl --context pause-922984 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (61.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (268.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-191773 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-191773 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m28.612827419s)

                                                
                                                
-- stdout --
	* [old-k8s-version-191773] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-191773" primary control-plane node in "old-k8s-version-191773" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 03:11:40.671428   60782 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:11:40.671522   60782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:11:40.671529   60782 out.go:358] Setting ErrFile to fd 2...
	I0205 03:11:40.671534   60782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:11:40.671726   60782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:11:40.672293   60782 out.go:352] Setting JSON to false
	I0205 03:11:40.673191   60782 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6852,"bootTime":1738718249,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:11:40.673284   60782 start.go:139] virtualization: kvm guest
	I0205 03:11:40.675592   60782 out.go:177] * [old-k8s-version-191773] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:11:40.676847   60782 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:11:40.676862   60782 notify.go:220] Checking for updates...
	I0205 03:11:40.679069   60782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:11:40.680298   60782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:11:40.681573   60782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:11:40.682733   60782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:11:40.683793   60782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:11:40.685621   60782 config.go:182] Loaded profile config "cert-expiration-908105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:11:40.685717   60782 config.go:182] Loaded profile config "kubernetes-upgrade-024079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0205 03:11:40.685828   60782 config.go:182] Loaded profile config "pause-922984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:11:40.685907   60782 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:11:40.721282   60782 out.go:177] * Using the kvm2 driver based on user configuration
	I0205 03:11:40.722411   60782 start.go:297] selected driver: kvm2
	I0205 03:11:40.722426   60782 start.go:901] validating driver "kvm2" against <nil>
	I0205 03:11:40.722436   60782 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:11:40.723144   60782 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:11:40.723224   60782 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:11:40.738182   60782 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:11:40.738230   60782 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0205 03:11:40.738554   60782 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:11:40.738588   60782 cni.go:84] Creating CNI manager for ""
	I0205 03:11:40.738634   60782 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:11:40.738643   60782 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0205 03:11:40.738695   60782 start.go:340] cluster config:
	{Name:old-k8s-version-191773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:11:40.738786   60782 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:11:40.740506   60782 out.go:177] * Starting "old-k8s-version-191773" primary control-plane node in "old-k8s-version-191773" cluster
	I0205 03:11:40.741611   60782 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0205 03:11:40.741644   60782 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0205 03:11:40.741650   60782 cache.go:56] Caching tarball of preloaded images
	I0205 03:11:40.741718   60782 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 03:11:40.741728   60782 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
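
The preload check above skips the tarball download because a versioned preload already sits in the local cache. A minimal sketch of that kind of cache lookup is shown below; the path layout mirrors the log, but the helper name and the fallback to $HOME/.minikube are illustrative assumptions, not minikube's preload.go.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadTarball builds the cache path for a versioned cri-o preload,
    // following the file name visible in the log above.
    func preloadTarball(minikubeHome, k8sVersion string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        home := os.Getenv("MINIKUBE_HOME")
        if home == "" {
            home = filepath.Join(os.Getenv("HOME"), ".minikube") // assumed default
        }
        p := preloadTarball(home, "v1.20.0")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("Found local preload in cache, skipping download:", p)
        } else {
            fmt.Println("No cached preload; it would be downloaded to:", p)
        }
    }
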
	I0205 03:11:40.741806   60782 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/config.json ...
	I0205 03:11:40.741823   60782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/config.json: {Name:mkd389f8141f413ff5ed61ed7f2339fe6ed23959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:11:40.741936   60782 start.go:360] acquireMachinesLock for old-k8s-version-191773: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:11:40.741963   60782 start.go:364] duration metric: took 14.094µs to acquireMachinesLock for "old-k8s-version-191773"
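
The acquireMachinesLock lines above show a named lock taken with a 500ms poll delay and a 13-minute timeout, so parallel test profiles never create VMs at the same time. The sketch below is an illustrative stand-in for that pattern using a capacity-one channel; it is not minikube's actual lock package.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var machines = make(chan struct{}, 1) // capacity-1 channel acting as a mutex

    // acquireMachinesLock polls until the lock is free or the timeout expires,
    // mirroring the Delay/Timeout fields printed in the log line above.
    func acquireMachinesLock(name string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            select {
            case machines <- struct{}{}:
                return nil
            default:
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring machines lock for " + name)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        if err := acquireMachinesLock("old-k8s-version-191773", 500*time.Millisecond, 13*time.Minute); err != nil {
            panic(err)
        }
        defer func() { <-machines }() // release
        fmt.Printf("took %s to acquire machines lock\n", time.Since(start))
    }
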
	I0205 03:11:40.741977   60782 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-191773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:11:40.742032   60782 start.go:125] createHost starting for "" (driver="kvm2")
	I0205 03:11:40.743419   60782 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0205 03:11:40.743555   60782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:11:40.743595   60782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:11:40.758033   60782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42229
	I0205 03:11:40.758517   60782 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:11:40.759062   60782 main.go:141] libmachine: Using API Version  1
	I0205 03:11:40.759082   60782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:11:40.759446   60782 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:11:40.759644   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetMachineName
	I0205 03:11:40.759838   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
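
The "Launching plugin server" and "Calling .GetVersion / .GetMachineName / .DriverName" lines above reflect libmachine driving the kvm2 driver as a separate plugin process over Go's net/rpc on a loopback port. The sketch below illustrates that pattern with a toy service; the service and method names are hypothetical stand-ins, not libmachine's real API.

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/rpc"
    )

    // DriverPlugin is a toy stand-in for the kvm2 plugin's RPC service.
    type DriverPlugin struct{}

    // GetVersion mirrors the "Calling .GetVersion" handshake in the log.
    func (d *DriverPlugin) GetVersion(_ int, reply *int) error {
        *reply = 1
        return nil
    }

    func main() {
        // Plugin side: register the service and listen on a loopback port
        // (the real plugin prints "Plugin server listening at address ...").
        srv := rpc.NewServer()
        if err := srv.Register(&DriverPlugin{}); err != nil {
            log.Fatal(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            log.Fatal(err)
        }
        go srv.Accept(ln)

        // libmachine side: dial the port and issue RPCs such as GetVersion,
        // GetMachineName, PreCreateCheck, Create, ...
        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        var version int
        if err := client.Call("DriverPlugin.GetVersion", 0, &version); err != nil {
            log.Fatal(err)
        }
        fmt.Println("Using API Version", version)
    }
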
	I0205 03:11:40.759997   60782 start.go:159] libmachine.API.Create for "old-k8s-version-191773" (driver="kvm2")
	I0205 03:11:40.760024   60782 client.go:168] LocalClient.Create starting
	I0205 03:11:40.760054   60782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem
	I0205 03:11:40.760090   60782 main.go:141] libmachine: Decoding PEM data...
	I0205 03:11:40.760106   60782 main.go:141] libmachine: Parsing certificate...
	I0205 03:11:40.760164   60782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem
	I0205 03:11:40.760183   60782 main.go:141] libmachine: Decoding PEM data...
	I0205 03:11:40.760206   60782 main.go:141] libmachine: Parsing certificate...
	I0205 03:11:40.760221   60782 main.go:141] libmachine: Running pre-create checks...
	I0205 03:11:40.760234   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .PreCreateCheck
	I0205 03:11:40.760551   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetConfigRaw
	I0205 03:11:40.760907   60782 main.go:141] libmachine: Creating machine...
	I0205 03:11:40.760919   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .Create
	I0205 03:11:40.761072   60782 main.go:141] libmachine: (old-k8s-version-191773) creating KVM machine...
	I0205 03:11:40.761087   60782 main.go:141] libmachine: (old-k8s-version-191773) creating network...
	I0205 03:11:40.762310   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found existing default KVM network
	I0205 03:11:40.764055   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:40.763889   60805 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001157f0}
	I0205 03:11:40.764093   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | created network xml: 
	I0205 03:11:40.764109   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | <network>
	I0205 03:11:40.764125   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG |   <name>mk-old-k8s-version-191773</name>
	I0205 03:11:40.764140   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG |   <dns enable='no'/>
	I0205 03:11:40.764153   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG |   
	I0205 03:11:40.764166   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0205 03:11:40.764179   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG |     <dhcp>
	I0205 03:11:40.764202   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0205 03:11:40.764216   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG |     </dhcp>
	I0205 03:11:40.764226   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG |   </ip>
	I0205 03:11:40.764245   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG |   
	I0205 03:11:40.764255   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | </network>
	I0205 03:11:40.764286   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | 
	I0205 03:11:40.769522   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | trying to create private KVM network mk-old-k8s-version-191773 192.168.39.0/24...
	I0205 03:11:40.840387   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | private KVM network mk-old-k8s-version-191773 192.168.39.0/24 created
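
The network XML printed above can also be created by hand, which is useful when debugging a stuck "trying to create private KVM network" step. The sketch below shells out to virsh with the same XML; it is an illustrative equivalent of what the driver does through the libvirt API, not minikube's code, and assumes virsh and a running libvirt daemon on the host.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    const networkXML = `<network>
      <name>mk-old-k8s-version-191773</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        f, err := os.CreateTemp("", "mk-net-*.xml")
        if err != nil {
            log.Fatal(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(networkXML); err != nil {
            log.Fatal(err)
        }
        f.Close()

        // Define the persistent network, then start it.
        for _, args := range [][]string{
            {"net-define", f.Name()},
            {"net-start", "mk-old-k8s-version-191773"},
        } {
            cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
            if out, err := cmd.CombinedOutput(); err != nil {
                log.Fatalf("virsh %v: %v\n%s", args, err, out)
            }
        }
    }
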
	I0205 03:11:40.840508   60782 main.go:141] libmachine: (old-k8s-version-191773) setting up store path in /home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773 ...
	I0205 03:11:40.840531   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:40.840343   60805 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:11:40.840551   60782 main.go:141] libmachine: (old-k8s-version-191773) building disk image from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0205 03:11:40.840580   60782 main.go:141] libmachine: (old-k8s-version-191773) Downloading /home/jenkins/minikube-integration/20363-12788/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0205 03:11:41.093421   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:41.093262   60805 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa...
	I0205 03:11:41.231703   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:41.231590   60805 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/old-k8s-version-191773.rawdisk...
	I0205 03:11:41.231738   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | Writing magic tar header
	I0205 03:11:41.231754   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | Writing SSH key tar header
	I0205 03:11:41.231772   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:41.231731   60805 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773 ...
	I0205 03:11:41.231904   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773
	I0205 03:11:41.231939   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines
	I0205 03:11:41.231955   60782 main.go:141] libmachine: (old-k8s-version-191773) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773 (perms=drwx------)
	I0205 03:11:41.231984   60782 main.go:141] libmachine: (old-k8s-version-191773) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines (perms=drwxr-xr-x)
	I0205 03:11:41.231990   60782 main.go:141] libmachine: (old-k8s-version-191773) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube (perms=drwxr-xr-x)
	I0205 03:11:41.232004   60782 main.go:141] libmachine: (old-k8s-version-191773) setting executable bit set on /home/jenkins/minikube-integration/20363-12788 (perms=drwxrwxr-x)
	I0205 03:11:41.232019   60782 main.go:141] libmachine: (old-k8s-version-191773) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0205 03:11:41.232031   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:11:41.232044   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788
	I0205 03:11:41.232057   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0205 03:11:41.232064   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | checking permissions on dir: /home/jenkins
	I0205 03:11:41.232071   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | checking permissions on dir: /home
	I0205 03:11:41.232095   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | skipping /home - not owner
	I0205 03:11:41.232112   60782 main.go:141] libmachine: (old-k8s-version-191773) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0205 03:11:41.232124   60782 main.go:141] libmachine: (old-k8s-version-191773) creating domain...
	I0205 03:11:41.233250   60782 main.go:141] libmachine: (old-k8s-version-191773) define libvirt domain using xml: 
	I0205 03:11:41.233270   60782 main.go:141] libmachine: (old-k8s-version-191773) <domain type='kvm'>
	I0205 03:11:41.233277   60782 main.go:141] libmachine: (old-k8s-version-191773)   <name>old-k8s-version-191773</name>
	I0205 03:11:41.233282   60782 main.go:141] libmachine: (old-k8s-version-191773)   <memory unit='MiB'>2200</memory>
	I0205 03:11:41.233288   60782 main.go:141] libmachine: (old-k8s-version-191773)   <vcpu>2</vcpu>
	I0205 03:11:41.233292   60782 main.go:141] libmachine: (old-k8s-version-191773)   <features>
	I0205 03:11:41.233297   60782 main.go:141] libmachine: (old-k8s-version-191773)     <acpi/>
	I0205 03:11:41.233301   60782 main.go:141] libmachine: (old-k8s-version-191773)     <apic/>
	I0205 03:11:41.233313   60782 main.go:141] libmachine: (old-k8s-version-191773)     <pae/>
	I0205 03:11:41.233320   60782 main.go:141] libmachine: (old-k8s-version-191773)     
	I0205 03:11:41.233324   60782 main.go:141] libmachine: (old-k8s-version-191773)   </features>
	I0205 03:11:41.233332   60782 main.go:141] libmachine: (old-k8s-version-191773)   <cpu mode='host-passthrough'>
	I0205 03:11:41.233350   60782 main.go:141] libmachine: (old-k8s-version-191773)   
	I0205 03:11:41.233355   60782 main.go:141] libmachine: (old-k8s-version-191773)   </cpu>
	I0205 03:11:41.233381   60782 main.go:141] libmachine: (old-k8s-version-191773)   <os>
	I0205 03:11:41.233410   60782 main.go:141] libmachine: (old-k8s-version-191773)     <type>hvm</type>
	I0205 03:11:41.233419   60782 main.go:141] libmachine: (old-k8s-version-191773)     <boot dev='cdrom'/>
	I0205 03:11:41.233427   60782 main.go:141] libmachine: (old-k8s-version-191773)     <boot dev='hd'/>
	I0205 03:11:41.233457   60782 main.go:141] libmachine: (old-k8s-version-191773)     <bootmenu enable='no'/>
	I0205 03:11:41.233485   60782 main.go:141] libmachine: (old-k8s-version-191773)   </os>
	I0205 03:11:41.233497   60782 main.go:141] libmachine: (old-k8s-version-191773)   <devices>
	I0205 03:11:41.233509   60782 main.go:141] libmachine: (old-k8s-version-191773)     <disk type='file' device='cdrom'>
	I0205 03:11:41.233521   60782 main.go:141] libmachine: (old-k8s-version-191773)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/boot2docker.iso'/>
	I0205 03:11:41.233531   60782 main.go:141] libmachine: (old-k8s-version-191773)       <target dev='hdc' bus='scsi'/>
	I0205 03:11:41.233539   60782 main.go:141] libmachine: (old-k8s-version-191773)       <readonly/>
	I0205 03:11:41.233548   60782 main.go:141] libmachine: (old-k8s-version-191773)     </disk>
	I0205 03:11:41.233557   60782 main.go:141] libmachine: (old-k8s-version-191773)     <disk type='file' device='disk'>
	I0205 03:11:41.233581   60782 main.go:141] libmachine: (old-k8s-version-191773)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0205 03:11:41.233595   60782 main.go:141] libmachine: (old-k8s-version-191773)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/old-k8s-version-191773.rawdisk'/>
	I0205 03:11:41.233602   60782 main.go:141] libmachine: (old-k8s-version-191773)       <target dev='hda' bus='virtio'/>
	I0205 03:11:41.233615   60782 main.go:141] libmachine: (old-k8s-version-191773)     </disk>
	I0205 03:11:41.233623   60782 main.go:141] libmachine: (old-k8s-version-191773)     <interface type='network'>
	I0205 03:11:41.233630   60782 main.go:141] libmachine: (old-k8s-version-191773)       <source network='mk-old-k8s-version-191773'/>
	I0205 03:11:41.233641   60782 main.go:141] libmachine: (old-k8s-version-191773)       <model type='virtio'/>
	I0205 03:11:41.233650   60782 main.go:141] libmachine: (old-k8s-version-191773)     </interface>
	I0205 03:11:41.233664   60782 main.go:141] libmachine: (old-k8s-version-191773)     <interface type='network'>
	I0205 03:11:41.233688   60782 main.go:141] libmachine: (old-k8s-version-191773)       <source network='default'/>
	I0205 03:11:41.233706   60782 main.go:141] libmachine: (old-k8s-version-191773)       <model type='virtio'/>
	I0205 03:11:41.233718   60782 main.go:141] libmachine: (old-k8s-version-191773)     </interface>
	I0205 03:11:41.233726   60782 main.go:141] libmachine: (old-k8s-version-191773)     <serial type='pty'>
	I0205 03:11:41.233734   60782 main.go:141] libmachine: (old-k8s-version-191773)       <target port='0'/>
	I0205 03:11:41.233739   60782 main.go:141] libmachine: (old-k8s-version-191773)     </serial>
	I0205 03:11:41.233747   60782 main.go:141] libmachine: (old-k8s-version-191773)     <console type='pty'>
	I0205 03:11:41.233752   60782 main.go:141] libmachine: (old-k8s-version-191773)       <target type='serial' port='0'/>
	I0205 03:11:41.233760   60782 main.go:141] libmachine: (old-k8s-version-191773)     </console>
	I0205 03:11:41.233765   60782 main.go:141] libmachine: (old-k8s-version-191773)     <rng model='virtio'>
	I0205 03:11:41.233781   60782 main.go:141] libmachine: (old-k8s-version-191773)       <backend model='random'>/dev/random</backend>
	I0205 03:11:41.233791   60782 main.go:141] libmachine: (old-k8s-version-191773)     </rng>
	I0205 03:11:41.233811   60782 main.go:141] libmachine: (old-k8s-version-191773)     
	I0205 03:11:41.233829   60782 main.go:141] libmachine: (old-k8s-version-191773)     
	I0205 03:11:41.233841   60782 main.go:141] libmachine: (old-k8s-version-191773)   </devices>
	I0205 03:11:41.233865   60782 main.go:141] libmachine: (old-k8s-version-191773) </domain>
	I0205 03:11:41.233879   60782 main.go:141] libmachine: (old-k8s-version-191773) 
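
The domain XML listed line by line above is then defined and started ("getting domain XML... creating domain..."). A rough equivalent using the libvirt Go bindings is sketched below; it assumes the libvirt.org/go/libvirt module, libvirt development headers, and a local daemon, and simplifies the driver's error handling.

    package main

    import (
        "log"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // The domain XML is the one printed in the log above, saved to a file.
        domainXML, err := os.ReadFile("old-k8s-version-191773.xml")
        if err != nil {
            log.Fatal(err)
        }

        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent domain, then start it.
        dom, err := conn.DomainDefineXML(string(domainXML))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain started; DHCP should assign it an address shortly")
    }
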
	I0205 03:11:41.238314   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:f9:b3:7f in network default
	I0205 03:11:41.238833   60782 main.go:141] libmachine: (old-k8s-version-191773) starting domain...
	I0205 03:11:41.238848   60782 main.go:141] libmachine: (old-k8s-version-191773) ensuring networks are active...
	I0205 03:11:41.238861   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:41.239577   60782 main.go:141] libmachine: (old-k8s-version-191773) Ensuring network default is active
	I0205 03:11:41.239960   60782 main.go:141] libmachine: (old-k8s-version-191773) Ensuring network mk-old-k8s-version-191773 is active
	I0205 03:11:41.240542   60782 main.go:141] libmachine: (old-k8s-version-191773) getting domain XML...
	I0205 03:11:41.241275   60782 main.go:141] libmachine: (old-k8s-version-191773) creating domain...
	I0205 03:11:42.475404   60782 main.go:141] libmachine: (old-k8s-version-191773) waiting for IP...
	I0205 03:11:42.476394   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:42.476908   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:42.476981   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:42.476912   60805 retry.go:31] will retry after 196.482731ms: waiting for domain to come up
	I0205 03:11:42.675369   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:42.675832   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:42.675858   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:42.675798   60805 retry.go:31] will retry after 271.826905ms: waiting for domain to come up
	I0205 03:11:42.949236   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:42.949724   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:42.949750   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:42.949681   60805 retry.go:31] will retry after 297.751701ms: waiting for domain to come up
	I0205 03:11:43.249228   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:43.249685   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:43.249751   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:43.249676   60805 retry.go:31] will retry after 467.173579ms: waiting for domain to come up
	I0205 03:11:43.717992   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:43.718510   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:43.718537   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:43.718480   60805 retry.go:31] will retry after 762.823451ms: waiting for domain to come up
	I0205 03:11:44.482442   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:44.482956   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:44.482997   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:44.482942   60805 retry.go:31] will retry after 659.833499ms: waiting for domain to come up
	I0205 03:11:45.144837   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:45.145276   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:45.145303   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:45.145242   60805 retry.go:31] will retry after 1.026080572s: waiting for domain to come up
	I0205 03:11:46.172529   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:46.172952   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:46.172979   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:46.172912   60805 retry.go:31] will retry after 1.292954846s: waiting for domain to come up
	I0205 03:11:47.467651   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:47.468100   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:47.468130   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:47.468061   60805 retry.go:31] will retry after 1.379560684s: waiting for domain to come up
	I0205 03:11:48.849673   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:48.850181   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:48.850213   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:48.850137   60805 retry.go:31] will retry after 1.638893449s: waiting for domain to come up
	I0205 03:11:50.491249   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:50.491896   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:50.491975   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:50.491876   60805 retry.go:31] will retry after 1.889793749s: waiting for domain to come up
	I0205 03:11:52.383103   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:52.383707   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:52.383755   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:52.383683   60805 retry.go:31] will retry after 2.796178827s: waiting for domain to come up
	I0205 03:11:55.183542   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:55.183984   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:55.184015   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:55.183957   60805 retry.go:31] will retry after 3.303291267s: waiting for domain to come up
	I0205 03:11:58.490824   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:11:58.491300   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:11:58.491326   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:11:58.491262   60805 retry.go:31] will retry after 4.650233729s: waiting for domain to come up
	I0205 03:12:03.143388   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.143813   60782 main.go:141] libmachine: (old-k8s-version-191773) found domain IP: 192.168.39.74
	I0205 03:12:03.143838   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has current primary IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
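
The "waiting for IP" block above is a poll loop: the driver checks the network's DHCP leases for the domain's MAC address and sleeps with a growing, jittered delay between attempts (the retry.go lines). The sketch below shows that shape in plain Go; lookupLeaseIP is a hypothetical helper standing in for the libvirt lease query.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupLeaseIP stands in for inspecting the libvirt network's DHCP leases.
    func lookupLeaseIP(mac string) (string, error) {
        return "", errNoLease
    }

    // waitForIP retries with a jittered, roughly exponential backoff, like the
    // "will retry after ...ms: waiting for domain to come up" lines above.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupLeaseIP(mac); err == nil {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
    }

    func main() {
        if _, err := waitForIP("52:54:00:87:fe:dd", 3*time.Second); err != nil {
            fmt.Println(err)
        }
    }
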
	I0205 03:12:03.143846   60782 main.go:141] libmachine: (old-k8s-version-191773) reserving static IP address...
	I0205 03:12:03.144201   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-191773", mac: "52:54:00:87:fe:dd", ip: "192.168.39.74"} in network mk-old-k8s-version-191773
	I0205 03:12:03.222721   60782 main.go:141] libmachine: (old-k8s-version-191773) reserved static IP address 192.168.39.74 for domain old-k8s-version-191773
	I0205 03:12:03.222782   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | Getting to WaitForSSH function...
	I0205 03:12:03.222793   60782 main.go:141] libmachine: (old-k8s-version-191773) waiting for SSH...
	I0205 03:12:03.225675   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.226078   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:03.226119   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.226290   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | Using SSH client type: external
	I0205 03:12:03.226319   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa (-rw-------)
	I0205 03:12:03.226359   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 03:12:03.226394   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | About to run SSH command:
	I0205 03:12:03.226417   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | exit 0
	I0205 03:12:03.353425   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | SSH cmd err, output: <nil>: 
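
The WaitForSSH step above runs the system ssh client with host-key checking disabled and a plain "exit 0" command, treating a zero exit as "sshd is up". The sketch below rebuilds that probe with os/exec; the option list mirrors the one logged above, and the key path is an example placeholder.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshReady returns true once `ssh ... "exit 0"` against the VM succeeds.
    func sshReady(ip, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        fmt.Println(sshReady("192.168.39.74", "/path/to/machines/old-k8s-version-191773/id_rsa"))
    }
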
	I0205 03:12:03.353723   60782 main.go:141] libmachine: (old-k8s-version-191773) KVM machine creation complete
	I0205 03:12:03.354024   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetConfigRaw
	I0205 03:12:03.354623   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:12:03.354834   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:12:03.355035   60782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0205 03:12:03.355049   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetState
	I0205 03:12:03.356449   60782 main.go:141] libmachine: Detecting operating system of created instance...
	I0205 03:12:03.356467   60782 main.go:141] libmachine: Waiting for SSH to be available...
	I0205 03:12:03.356475   60782 main.go:141] libmachine: Getting to WaitForSSH function...
	I0205 03:12:03.356483   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:12:03.359073   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.359540   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:03.359569   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.359687   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:12:03.359877   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:03.360049   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:03.360213   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:12:03.360405   60782 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:03.360662   60782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0205 03:12:03.360679   60782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0205 03:12:03.472737   60782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:12:03.472766   60782 main.go:141] libmachine: Detecting the provisioner...
	I0205 03:12:03.472776   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:12:03.475945   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.476311   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:03.476358   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.476543   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:12:03.476713   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:03.476880   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:03.476998   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:12:03.477132   60782 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:03.477312   60782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0205 03:12:03.477322   60782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0205 03:12:03.590041   60782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0205 03:12:03.590123   60782 main.go:141] libmachine: found compatible host: buildroot
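
Provisioner detection above boils down to running `cat /etc/os-release` over SSH and keying off the ID/NAME fields ("buildroot" here). The parser below is an illustrative version of that step, fed with the exact output captured in the log; it is not libmachine's implementation.

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const osRelease = `NAME=Buildroot
    VERSION=2023.02.9-dirty
    ID=buildroot
    VERSION_ID=2023.02.9
    PRETTY_NAME="Buildroot 2023.02.9"`

    // parseOSRelease turns KEY=value lines into a map, trimming quotes and
    // ignoring anything that is not a key/value pair.
    func parseOSRelease(s string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(s))
        for sc.Scan() {
            k, v, ok := strings.Cut(strings.TrimSpace(sc.Text()), "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out
    }

    func main() {
        info := parseOSRelease(osRelease)
        fmt.Printf("found compatible host: %s (%s)\n", info["ID"], info["PRETTY_NAME"])
    }
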
	I0205 03:12:03.590155   60782 main.go:141] libmachine: Provisioning with buildroot...
	I0205 03:12:03.590166   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetMachineName
	I0205 03:12:03.590415   60782 buildroot.go:166] provisioning hostname "old-k8s-version-191773"
	I0205 03:12:03.590446   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetMachineName
	I0205 03:12:03.590634   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:12:03.593191   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.593513   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:03.593550   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.593722   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:12:03.593896   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:03.594033   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:03.594204   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:12:03.594381   60782 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:03.594569   60782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0205 03:12:03.594586   60782 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-191773 && echo "old-k8s-version-191773" | sudo tee /etc/hostname
	I0205 03:12:03.725290   60782 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-191773
	
	I0205 03:12:03.725319   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:12:03.728056   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.728455   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:03.728488   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.728662   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:12:03.728841   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:03.729027   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:03.729161   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:12:03.729282   60782 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:03.729486   60782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0205 03:12:03.729504   60782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-191773' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-191773/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-191773' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:12:03.846524   60782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:12:03.846552   60782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:12:03.846568   60782 buildroot.go:174] setting up certificates
	I0205 03:12:03.846578   60782 provision.go:84] configureAuth start
	I0205 03:12:03.846586   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetMachineName
	I0205 03:12:03.846862   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetIP
	I0205 03:12:03.849675   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.850051   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:03.850074   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.850242   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:12:03.852591   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.852930   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:03.852960   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:03.853114   60782 provision.go:143] copyHostCerts
	I0205 03:12:03.853193   60782 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:12:03.853210   60782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:12:03.853302   60782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:12:03.853462   60782 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:12:03.853480   60782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:12:03.853522   60782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:12:03.853613   60782 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:12:03.853622   60782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:12:03.853656   60782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:12:03.853738   60782 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-191773 san=[127.0.0.1 192.168.39.74 localhost minikube old-k8s-version-191773]
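
The "generating server cert" line above issues a server certificate signed by the local minikube CA, with exactly the SANs listed (127.0.0.1, the VM IP, localhost, minikube, the machine name). The sketch below shows the same idea with crypto/x509; the self-signed CA, key sizes, and lifetimes are illustrative simplifications, since the real flow reuses ca.pem/ca-key.pem from the .minikube/certs directory.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "log"
        "math/big"
        "net"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            log.Fatal(err)
        }
        return v
    }

    func main() {
        // Stand-in CA; in the real flow the CA key/cert already exist on disk.
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

        // Server certificate carrying the SANs from the log line above.
        srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-191773"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.74")},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-191773"},
        }
        srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        log.Printf("generated server cert (%d bytes DER) with the SANs from the log", len(srvDER))
    }
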
	I0205 03:12:04.112400   60782 provision.go:177] copyRemoteCerts
	I0205 03:12:04.112464   60782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:12:04.112486   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:12:04.115099   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.115516   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:04.115552   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.115744   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:12:04.115931   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:04.116086   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:12:04.116217   60782 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa Username:docker}
	I0205 03:12:04.199570   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:12:04.224335   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0205 03:12:04.248575   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0205 03:12:04.272735   60782 provision.go:87] duration metric: took 426.141685ms to configureAuth
	I0205 03:12:04.272770   60782 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:12:04.273005   60782 config.go:182] Loaded profile config "old-k8s-version-191773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0205 03:12:04.273107   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:12:04.275813   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.276187   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:04.276220   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.276396   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:12:04.276566   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:04.276737   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:04.276905   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:12:04.277093   60782 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:04.277325   60782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0205 03:12:04.277361   60782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:12:04.531262   60782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
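
The container-runtime step just executed over SSH drops a CRIO_MINIKUBE_OPTIONS file under /etc/sysconfig and restarts crio, so the whole service CIDR (10.96.0.0/12) is treated as an insecure registry range. The sketch below does the same thing if run as root on the guest; it is an illustrative equivalent of the logged shell command, not minikube's code.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const contents = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(contents), 0o644); err != nil {
            log.Fatal(err)
        }
        // Restart crio so the drop-in takes effect.
        if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
            log.Fatalf("restart crio: %v\n%s", err, out)
        }
    }
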
	
	I0205 03:12:04.531289   60782 main.go:141] libmachine: Checking connection to Docker...
	I0205 03:12:04.531298   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetURL
	I0205 03:12:04.532694   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | using libvirt version 6000000
	I0205 03:12:04.534913   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.535241   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:04.535276   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.535497   60782 main.go:141] libmachine: Docker is up and running!
	I0205 03:12:04.535513   60782 main.go:141] libmachine: Reticulating splines...
	I0205 03:12:04.535521   60782 client.go:171] duration metric: took 23.775490638s to LocalClient.Create
	I0205 03:12:04.535548   60782 start.go:167] duration metric: took 23.77554908s to libmachine.API.Create "old-k8s-version-191773"
	I0205 03:12:04.535561   60782 start.go:293] postStartSetup for "old-k8s-version-191773" (driver="kvm2")
	I0205 03:12:04.535574   60782 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:12:04.535598   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:12:04.535874   60782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:12:04.535902   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:12:04.538437   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.538857   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:04.538887   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.539071   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:12:04.539253   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:04.539420   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:12:04.539554   60782 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa Username:docker}
	I0205 03:12:04.625531   60782 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:12:04.629911   60782 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:12:04.629937   60782 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:12:04.630012   60782 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:12:04.630137   60782 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:12:04.630246   60782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:12:04.640318   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:12:04.668366   60782 start.go:296] duration metric: took 132.790893ms for postStartSetup
	I0205 03:12:04.668451   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetConfigRaw
	I0205 03:12:04.669067   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetIP
	I0205 03:12:04.671547   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.671964   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:04.671996   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.672233   60782 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/config.json ...
	I0205 03:12:04.672414   60782 start.go:128] duration metric: took 23.930370405s to createHost
	I0205 03:12:04.672438   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:12:04.674942   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.675336   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:04.675368   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.675531   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:12:04.675723   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:04.675878   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:04.676018   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:12:04.676213   60782 main.go:141] libmachine: Using SSH client type: native
	I0205 03:12:04.676366   60782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0205 03:12:04.676384   60782 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:12:04.786113   60782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738725124.761891623
	
	I0205 03:12:04.786152   60782 fix.go:216] guest clock: 1738725124.761891623
	I0205 03:12:04.786163   60782 fix.go:229] Guest: 2025-02-05 03:12:04.761891623 +0000 UTC Remote: 2025-02-05 03:12:04.672424787 +0000 UTC m=+24.038269505 (delta=89.466836ms)
	I0205 03:12:04.786218   60782 fix.go:200] guest clock delta is within tolerance: 89.466836ms
	I0205 03:12:04.786225   60782 start.go:83] releasing machines lock for "old-k8s-version-191773", held for 24.044255238s
	I0205 03:12:04.786257   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:12:04.786558   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetIP
	I0205 03:12:04.789512   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.789976   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:04.790012   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.790315   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:12:04.790852   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:12:04.791063   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:12:04.791167   60782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:12:04.791231   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:12:04.791327   60782 ssh_runner.go:195] Run: cat /version.json
	I0205 03:12:04.791354   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:12:04.794635   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.794913   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.795073   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:04.795129   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.795347   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:12:04.795427   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:04.795446   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:04.795493   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:04.795648   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:12:04.795792   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:12:04.795841   60782 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa Username:docker}
	I0205 03:12:04.795904   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:12:04.795999   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:12:04.796076   60782 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa Username:docker}
	I0205 03:12:04.905480   60782 ssh_runner.go:195] Run: systemctl --version
	I0205 03:12:04.911531   60782 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:12:05.078129   60782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:12:05.086937   60782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:12:05.087026   60782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:12:05.104454   60782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0205 03:12:05.104484   60782 start.go:495] detecting cgroup driver to use...
	I0205 03:12:05.104553   60782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:12:05.121516   60782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:12:05.136907   60782 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:12:05.136969   60782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:12:05.157674   60782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:12:05.178457   60782 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:12:05.307688   60782 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:12:05.453913   60782 docker.go:233] disabling docker service ...
	I0205 03:12:05.454051   60782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:12:05.471545   60782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:12:05.485814   60782 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:12:05.624616   60782 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:12:05.755112   60782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:12:05.770747   60782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:12:05.801491   60782 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0205 03:12:05.801568   60782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:05.813204   60782 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:12:05.813289   60782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:05.824826   60782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:05.835923   60782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:12:05.847500   60782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:12:05.863830   60782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:12:05.874701   60782 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0205 03:12:05.874770   60782 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0205 03:12:05.890432   60782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 03:12:05.900598   60782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:12:06.034028   60782 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:12:06.136510   60782 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:12:06.136591   60782 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:12:06.141429   60782 start.go:563] Will wait 60s for crictl version
	I0205 03:12:06.141546   60782 ssh_runner.go:195] Run: which crictl
	I0205 03:12:06.145621   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:12:06.187188   60782 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:12:06.187291   60782 ssh_runner.go:195] Run: crio --version
	I0205 03:12:06.217940   60782 ssh_runner.go:195] Run: crio --version
	I0205 03:12:06.249026   60782 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0205 03:12:06.250154   60782 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetIP
	I0205 03:12:06.252835   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:06.253216   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:11:55 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:12:06.253246   60782 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:12:06.253524   60782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0205 03:12:06.257691   60782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:12:06.269908   60782 kubeadm.go:883] updating cluster {Name:old-k8s-version-191773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:12:06.270047   60782 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0205 03:12:06.270111   60782 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:12:06.304133   60782 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0205 03:12:06.304205   60782 ssh_runner.go:195] Run: which lz4
	I0205 03:12:06.308256   60782 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0205 03:12:06.313197   60782 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0205 03:12:06.313235   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0205 03:12:07.861793   60782 crio.go:462] duration metric: took 1.553576305s to copy over tarball
	I0205 03:12:07.861866   60782 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0205 03:12:10.507908   60782 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.646015961s)
	I0205 03:12:10.507941   60782 crio.go:469] duration metric: took 2.646120944s to extract the tarball
	I0205 03:12:10.507951   60782 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0205 03:12:10.564807   60782 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:12:10.613502   60782 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0205 03:12:10.613537   60782 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0205 03:12:10.613623   60782 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:12:10.613653   60782 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0205 03:12:10.613668   60782 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:12:10.613678   60782 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:12:10.613628   60782 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:12:10.613667   60782 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:12:10.613657   60782 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0205 03:12:10.613646   60782 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0205 03:12:10.615069   60782 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0205 03:12:10.615150   60782 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:12:10.615169   60782 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:12:10.615081   60782 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0205 03:12:10.615249   60782 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0205 03:12:10.615336   60782 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:12:10.615400   60782 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:12:10.615491   60782 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:12:10.782457   60782 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0205 03:12:10.789310   60782 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0205 03:12:10.789448   60782 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:12:10.818605   60782 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:12:10.836924   60782 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0205 03:12:10.845000   60782 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:12:10.898637   60782 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0205 03:12:10.898702   60782 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0205 03:12:10.898752   60782 ssh_runner.go:195] Run: which crictl
	I0205 03:12:10.902195   60782 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0205 03:12:10.902294   60782 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0205 03:12:10.902347   60782 ssh_runner.go:195] Run: which crictl
	I0205 03:12:10.902251   60782 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0205 03:12:10.902424   60782 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:12:10.902477   60782 ssh_runner.go:195] Run: which crictl
	I0205 03:12:10.944598   60782 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:12:10.949448   60782 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0205 03:12:10.949488   60782 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:12:10.949534   60782 ssh_runner.go:195] Run: which crictl
	I0205 03:12:10.963100   60782 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0205 03:12:10.963149   60782 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0205 03:12:10.963154   60782 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0205 03:12:10.963184   60782 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:12:10.963206   60782 ssh_runner.go:195] Run: which crictl
	I0205 03:12:10.963208   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0205 03:12:10.963233   60782 ssh_runner.go:195] Run: which crictl
	I0205 03:12:10.963248   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0205 03:12:10.963276   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:12:11.049618   60782 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0205 03:12:11.049666   60782 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:12:11.049707   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:12:11.049710   60782 ssh_runner.go:195] Run: which crictl
	I0205 03:12:11.063200   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:12:11.063255   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0205 03:12:11.063200   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0205 03:12:11.063312   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0205 03:12:11.063323   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:12:11.146150   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:12:11.146177   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:12:11.215698   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:12:11.215736   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0205 03:12:11.224960   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:12:11.224974   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0205 03:12:11.225058   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0205 03:12:11.285361   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:12:11.285504   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:12:11.353310   60782 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0205 03:12:11.357560   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0205 03:12:11.383686   60782 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0205 03:12:11.383769   60782 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0205 03:12:11.383846   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:12:11.394164   60782 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0205 03:12:11.405090   60782 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:12:11.440019   60782 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0205 03:12:11.454873   60782 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0205 03:12:11.458015   60782 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0205 03:12:11.558858   60782 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:12:11.704050   60782 cache_images.go:92] duration metric: took 1.090493153s to LoadCachedImages
	W0205 03:12:11.704163   60782 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0205 03:12:11.704182   60782 kubeadm.go:934] updating node { 192.168.39.74 8443 v1.20.0 crio true true} ...
	I0205 03:12:11.704333   60782 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-191773 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0205 03:12:11.704401   60782 ssh_runner.go:195] Run: crio config
	I0205 03:12:11.753915   60782 cni.go:84] Creating CNI manager for ""
	I0205 03:12:11.753938   60782 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:12:11.753949   60782 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:12:11.753967   60782 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.74 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-191773 NodeName:old-k8s-version-191773 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0205 03:12:11.754086   60782 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-191773"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 03:12:11.754151   60782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0205 03:12:11.765795   60782 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:12:11.765896   60782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:12:11.777280   60782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0205 03:12:11.795595   60782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:12:11.813470   60782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0205 03:12:11.832154   60782 ssh_runner.go:195] Run: grep 192.168.39.74	control-plane.minikube.internal$ /etc/hosts
	I0205 03:12:11.835933   60782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:12:11.848936   60782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:12:11.993404   60782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:12:12.011800   60782 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773 for IP: 192.168.39.74
	I0205 03:12:12.011835   60782 certs.go:194] generating shared ca certs ...
	I0205 03:12:12.011858   60782 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:12.012062   60782 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:12:12.012125   60782 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:12:12.012140   60782 certs.go:256] generating profile certs ...
	I0205 03:12:12.012231   60782 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/client.key
	I0205 03:12:12.012248   60782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/client.crt with IP's: []
	I0205 03:12:12.186597   60782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/client.crt ...
	I0205 03:12:12.186635   60782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/client.crt: {Name:mk49efd14d2c4ebc98894b6411b8352e5b4c5e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:12.186900   60782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/client.key ...
	I0205 03:12:12.186918   60782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/client.key: {Name:mk2731786e8566a74967395c8f2a693bb73d2142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:12.187030   60782 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.key.213c5845
	I0205 03:12:12.187050   60782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.crt.213c5845 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.74]
	I0205 03:12:12.317837   60782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.crt.213c5845 ...
	I0205 03:12:12.317884   60782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.crt.213c5845: {Name:mk62767e6e4268a686ae4f6dba1d3512122ff0ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:12.318130   60782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.key.213c5845 ...
	I0205 03:12:12.318161   60782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.key.213c5845: {Name:mk0fa7405ead83cf80af104eb124adb57f1e8ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:12.318328   60782 certs.go:381] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.crt.213c5845 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.crt
	I0205 03:12:12.318470   60782 certs.go:385] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.key.213c5845 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.key
	I0205 03:12:12.318540   60782 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/proxy-client.key
	I0205 03:12:12.318561   60782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/proxy-client.crt with IP's: []
	I0205 03:12:12.752764   60782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/proxy-client.crt ...
	I0205 03:12:12.752791   60782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/proxy-client.crt: {Name:mk2adee6da8d904cf0fc1de89a215d17c2a70796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:12.752953   60782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/proxy-client.key ...
	I0205 03:12:12.752965   60782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/proxy-client.key: {Name:mkf3d3aea44f5798ad1ec50a31e5f53fec5cd48e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:12:12.753142   60782 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:12:12.753181   60782 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:12:12.753191   60782 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:12:12.753220   60782 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:12:12.753257   60782 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:12:12.753282   60782 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:12:12.753319   60782 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:12:12.753875   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:12:12.780027   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:12:12.803016   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:12:12.827463   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:12:12.855981   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0205 03:12:12.894301   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0205 03:12:12.928521   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:12:12.951023   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 03:12:12.972862   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:12:12.996497   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:12:13.020104   60782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:12:13.042438   60782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:12:13.059057   60782 ssh_runner.go:195] Run: openssl version
	I0205 03:12:13.065088   60782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:12:13.075992   60782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:12:13.080360   60782 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:12:13.080423   60782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:12:13.086050   60782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 03:12:13.098172   60782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19989.pem && ln -fs /usr/share/ca-certificates/19989.pem /etc/ssl/certs/19989.pem"
	I0205 03:12:13.109209   60782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19989.pem
	I0205 03:12:13.113691   60782 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:11 /usr/share/ca-certificates/19989.pem
	I0205 03:12:13.113749   60782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19989.pem
	I0205 03:12:13.119221   60782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19989.pem /etc/ssl/certs/51391683.0"
	I0205 03:12:13.129774   60782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:12:13.140016   60782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:12:13.144190   60782 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:12:13.144250   60782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:12:13.149954   60782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:12:13.161615   60782 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 03:12:13.165448   60782 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0205 03:12:13.165496   60782 kubeadm.go:392] StartCluster: {Name:old-k8s-version-191773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:12:13.165560   60782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 03:12:13.165600   60782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:12:13.203035   60782 cri.go:89] found id: ""
	I0205 03:12:13.203098   60782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 03:12:13.213667   60782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 03:12:13.224997   60782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:12:13.234864   60782 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:12:13.234883   60782 kubeadm.go:157] found existing configuration files:
	
	I0205 03:12:13.234924   60782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:12:13.243961   60782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:12:13.244034   60782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:12:13.253106   60782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:12:13.262227   60782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:12:13.262315   60782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:12:13.271749   60782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:12:13.280487   60782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:12:13.280543   60782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:12:13.290671   60782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:12:13.299751   60782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:12:13.299815   60782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 03:12:13.309369   60782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:12:13.426162   60782 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0205 03:12:13.426256   60782 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:12:13.594972   60782 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:12:13.595169   60782 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:12:13.595330   60782 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0205 03:12:13.827129   60782 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:12:14.044494   60782 out.go:235]   - Generating certificates and keys ...
	I0205 03:12:14.044638   60782 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:12:14.044747   60782 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:12:14.044894   60782 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 03:12:14.162900   60782 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 03:12:14.401545   60782 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 03:12:14.816155   60782 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 03:12:15.071858   60782 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 03:12:15.072071   60782 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-191773] and IPs [192.168.39.74 127.0.0.1 ::1]
	I0205 03:12:15.223663   60782 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 03:12:15.224178   60782 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-191773] and IPs [192.168.39.74 127.0.0.1 ::1]
	I0205 03:12:15.611954   60782 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 03:12:15.730142   60782 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 03:12:15.805509   60782 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 03:12:15.805952   60782 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:12:15.893294   60782 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:12:15.998614   60782 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:12:16.213085   60782 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:12:16.406193   60782 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:12:16.426712   60782 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:12:16.428908   60782 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:12:16.429005   60782 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:12:16.595620   60782 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:12:16.597537   60782 out.go:235]   - Booting up control plane ...
	I0205 03:12:16.597694   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:12:16.605308   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:12:16.606721   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:12:16.609285   60782 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:12:16.613978   60782 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0205 03:12:56.607846   60782 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0205 03:12:56.608490   60782 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:12:56.608776   60782 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:13:01.608926   60782 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:13:01.609189   60782 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:13:11.608378   60782 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:13:11.608708   60782 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:13:31.607948   60782 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:13:31.608286   60782 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:14:11.609210   60782 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:14:11.609474   60782 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:14:11.609510   60782 kubeadm.go:310] 
	I0205 03:14:11.609581   60782 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0205 03:14:11.609650   60782 kubeadm.go:310] 		timed out waiting for the condition
	I0205 03:14:11.609660   60782 kubeadm.go:310] 
	I0205 03:14:11.609732   60782 kubeadm.go:310] 	This error is likely caused by:
	I0205 03:14:11.609793   60782 kubeadm.go:310] 		- The kubelet is not running
	I0205 03:14:11.609935   60782 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0205 03:14:11.609950   60782 kubeadm.go:310] 
	I0205 03:14:11.610074   60782 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0205 03:14:11.610144   60782 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0205 03:14:11.610195   60782 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0205 03:14:11.610205   60782 kubeadm.go:310] 
	I0205 03:14:11.610340   60782 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0205 03:14:11.610480   60782 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0205 03:14:11.610489   60782 kubeadm.go:310] 
	I0205 03:14:11.610566   60782 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0205 03:14:11.610634   60782 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0205 03:14:11.610693   60782 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0205 03:14:11.610753   60782 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0205 03:14:11.610762   60782 kubeadm.go:310] 
	I0205 03:14:11.611434   60782 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:14:11.611512   60782 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0205 03:14:11.611577   60782 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
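	The repeated kubelet-check lines above are kubeadm polling the kubelet's local health endpoint on the node. A minimal sketch of running the same probes by hand, assuming the profile name from this run (old-k8s-version-191773) and the stock 'minikube ssh' subcommand:

	  # probe the kubelet health endpoint that kubeadm kept retrying
	  minikube -p old-k8s-version-191773 ssh -- curl -sS http://localhost:10248/healthz
	  # check the kubelet unit itself, as the error text above suggests
	  minikube -p old-k8s-version-191773 ssh -- sudo systemctl status kubelet --no-pager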
	W0205 03:14:11.611719   60782 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-191773] and IPs [192.168.39.74 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-191773] and IPs [192.168.39.74 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0205 03:14:11.611758   60782 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0205 03:14:12.081412   60782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:14:12.096243   60782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:14:12.107983   60782 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:14:12.108007   60782 kubeadm.go:157] found existing configuration files:
	
	I0205 03:14:12.108064   60782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:14:12.116793   60782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:14:12.116846   60782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:14:12.128376   60782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:14:12.137247   60782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:14:12.137315   60782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:14:12.147851   60782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:14:12.156915   60782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:14:12.156982   60782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:14:12.166352   60782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:14:12.175035   60782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:14:12.175101   60782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
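	The four grep/rm pairs above are minikube clearing stale kubeconfig files before retrying 'kubeadm init'. A rough shell equivalent of the observed cleanup (a sketch of the commands in this log, not minikube's actual implementation):

	  for f in admin kubelet controller-manager scheduler; do
	    # keep the file only if it already points at the expected control-plane endpoint
	    sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	      || sudo rm -f /etc/kubernetes/$f.conf
	  done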
	I0205 03:14:12.184344   60782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:14:12.253322   60782 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0205 03:14:12.253431   60782 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:14:12.415618   60782 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:14:12.415743   60782 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:14:12.415893   60782 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0205 03:14:12.615737   60782 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:14:12.618678   60782 out.go:235]   - Generating certificates and keys ...
	I0205 03:14:12.618801   60782 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:14:12.618874   60782 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:14:12.618978   60782 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0205 03:14:12.619072   60782 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0205 03:14:12.619182   60782 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0205 03:14:12.619261   60782 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0205 03:14:12.619356   60782 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0205 03:14:12.619466   60782 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0205 03:14:12.619575   60782 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0205 03:14:12.619676   60782 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0205 03:14:12.619728   60782 kubeadm.go:310] [certs] Using the existing "sa" key
	I0205 03:14:12.619808   60782 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:14:12.835869   60782 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:14:12.952437   60782 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:14:13.076559   60782 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:14:13.474349   60782 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:14:13.498972   60782 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:14:13.499094   60782 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:14:13.499163   60782 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:14:13.644879   60782 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:14:13.646246   60782 out.go:235]   - Booting up control plane ...
	I0205 03:14:13.646401   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:14:13.658451   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:14:13.660351   60782 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:14:13.661585   60782 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:14:13.667893   60782 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0205 03:14:53.669895   60782 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0205 03:14:53.670234   60782 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:14:53.670427   60782 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:14:58.671339   60782 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:14:58.671704   60782 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:15:08.672313   60782 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:15:08.672597   60782 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:15:28.671893   60782 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:15:28.672164   60782 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:16:08.672460   60782 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:16:08.672681   60782 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:16:08.672702   60782 kubeadm.go:310] 
	I0205 03:16:08.672737   60782 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0205 03:16:08.672772   60782 kubeadm.go:310] 		timed out waiting for the condition
	I0205 03:16:08.672779   60782 kubeadm.go:310] 
	I0205 03:16:08.672812   60782 kubeadm.go:310] 	This error is likely caused by:
	I0205 03:16:08.672841   60782 kubeadm.go:310] 		- The kubelet is not running
	I0205 03:16:08.672933   60782 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0205 03:16:08.672952   60782 kubeadm.go:310] 
	I0205 03:16:08.673061   60782 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0205 03:16:08.673116   60782 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0205 03:16:08.673163   60782 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0205 03:16:08.673173   60782 kubeadm.go:310] 
	I0205 03:16:08.673292   60782 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0205 03:16:08.673409   60782 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0205 03:16:08.673423   60782 kubeadm.go:310] 
	I0205 03:16:08.673548   60782 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0205 03:16:08.673664   60782 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0205 03:16:08.673771   60782 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0205 03:16:08.673873   60782 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0205 03:16:08.673885   60782 kubeadm.go:310] 
	I0205 03:16:08.674391   60782 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:16:08.674535   60782 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0205 03:16:08.674655   60782 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0205 03:16:08.674725   60782 kubeadm.go:394] duration metric: took 3m55.509231187s to StartCluster
	I0205 03:16:08.674761   60782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:16:08.674818   60782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:16:08.716336   60782 cri.go:89] found id: ""
	I0205 03:16:08.716368   60782 logs.go:282] 0 containers: []
	W0205 03:16:08.716388   60782 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:16:08.716397   60782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:16:08.716459   60782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:16:08.749385   60782 cri.go:89] found id: ""
	I0205 03:16:08.749432   60782 logs.go:282] 0 containers: []
	W0205 03:16:08.749445   60782 logs.go:284] No container was found matching "etcd"
	I0205 03:16:08.749453   60782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:16:08.749519   60782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:16:08.783692   60782 cri.go:89] found id: ""
	I0205 03:16:08.783718   60782 logs.go:282] 0 containers: []
	W0205 03:16:08.783725   60782 logs.go:284] No container was found matching "coredns"
	I0205 03:16:08.783731   60782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:16:08.783785   60782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:16:08.816722   60782 cri.go:89] found id: ""
	I0205 03:16:08.816750   60782 logs.go:282] 0 containers: []
	W0205 03:16:08.816758   60782 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:16:08.816763   60782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:16:08.816810   60782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:16:08.848568   60782 cri.go:89] found id: ""
	I0205 03:16:08.848603   60782 logs.go:282] 0 containers: []
	W0205 03:16:08.848614   60782 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:16:08.848624   60782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:16:08.848689   60782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:16:08.880453   60782 cri.go:89] found id: ""
	I0205 03:16:08.880481   60782 logs.go:282] 0 containers: []
	W0205 03:16:08.880491   60782 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:16:08.880498   60782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:16:08.880559   60782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:16:08.912822   60782 cri.go:89] found id: ""
	I0205 03:16:08.912855   60782 logs.go:282] 0 containers: []
	W0205 03:16:08.912865   60782 logs.go:284] No container was found matching "kindnet"
	I0205 03:16:08.912875   60782 logs.go:123] Gathering logs for kubelet ...
	I0205 03:16:08.912885   60782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:16:08.962285   60782 logs.go:123] Gathering logs for dmesg ...
	I0205 03:16:08.962321   60782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:16:08.975626   60782 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:16:08.975651   60782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:16:09.089970   60782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:16:09.090002   60782 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:16:09.090018   60782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:16:09.196191   60782 logs.go:123] Gathering logs for container status ...
	I0205 03:16:09.196232   60782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0205 03:16:09.231206   60782 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0205 03:16:09.231258   60782 out.go:270] * 
	W0205 03:16:09.231310   60782 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0205 03:16:09.231326   60782 out.go:270] * 
	W0205 03:16:09.232151   60782 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0205 03:16:09.234851   60782 out.go:201] 
	W0205 03:16:09.235933   60782 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0205 03:16:09.235986   60782 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0205 03:16:09.236014   60782 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0205 03:16:09.237525   60782 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-191773 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 6 (229.204159ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0205 03:16:09.506681   63527 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-191773" does not appear in /home/jenkins/minikube-integration/20363-12788/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-191773" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (268.89s)
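The kubeadm output above shows the kubelet never answering its health check on 127.0.0.1:10248, and the minikube suggestion points at the cgroup driver. A minimal triage sketch, assuming the profile and VM still exist; the commands simply mirror the suggestions printed in the log, and the grep pattern is illustrative:

	# inspect kubelet state and recent logs inside the VM
	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# list control-plane containers via CRI-O, as kubeadm recommends
	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup-driver hint from the suggestion above
	out/minikube-linux-amd64 start -p old-k8s-version-191773 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd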

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-191773 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-191773 create -f testdata/busybox.yaml: exit status 1 (41.651652ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-191773" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-191773 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 6 (218.524433ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0205 03:16:09.767670   63568 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-191773" does not appear in /home/jenkins/minikube-integration/20363-12788/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-191773" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 6 (221.725678ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0205 03:16:09.988353   63598 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-191773" does not appear in /home/jenkins/minikube-integration/20363-12788/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-191773" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)
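The create call fails only because the kubeconfig no longer carries a context for this profile; the status output's own warning names the fix. A minimal sketch, assuming the cluster itself comes back up (update-context only rewrites the kubeconfig entry; the busybox manifest path is the one the test uses):

	# see which contexts the kubeconfig actually contains
	kubectl config get-contexts
	# regenerate the profile's context, as the status warning suggests
	out/minikube-linux-amd64 -p old-k8s-version-191773 update-context
	# retry the deployment once the context resolves
	kubectl --context old-k8s-version-191773 create -f testdata/busybox.yaml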

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (109.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-191773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0205 03:17:27.842640   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-191773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m49.346483356s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-191773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-191773 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-191773 describe deploy/metrics-server -n kube-system: exit status 1 (57.492817ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-191773" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-191773 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 6 (229.036025ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0205 03:17:59.622403   64717 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-191773" does not appear in /home/jenkins/minikube-integration/20363-12788/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-191773" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (109.63s)
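The enable fails inside the VM because the apiserver on localhost:8443 refuses connections, so every kubectl apply in the addon callbacks is rejected. A minimal sketch for checking the control plane before retrying, assuming SSH access to the VM (the grep pattern is illustrative; the enable command is the same one the test ran):

	# confirm whether kube-apiserver is running under CRI-O
	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "sudo crictl ps -a | grep kube-apiserver"
	# once the control plane answers on 8443, retry the enable with the same overrides
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-191773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain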

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (507.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-191773 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0205 03:19:09.273897   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:21:04.764817   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-191773 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m25.751506108s)

                                                
                                                
-- stdout --
	* [old-k8s-version-191773] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-191773" primary control-plane node in "old-k8s-version-191773" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-191773" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 03:18:05.169187   64850 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:18:05.169283   64850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:18:05.169288   64850 out.go:358] Setting ErrFile to fd 2...
	I0205 03:18:05.169292   64850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:18:05.169506   64850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:18:05.170077   64850 out.go:352] Setting JSON to false
	I0205 03:18:05.171013   64850 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7236,"bootTime":1738718249,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:18:05.171113   64850 start.go:139] virtualization: kvm guest
	I0205 03:18:05.173398   64850 out.go:177] * [old-k8s-version-191773] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:18:05.174760   64850 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:18:05.174826   64850 notify.go:220] Checking for updates...
	I0205 03:18:05.177156   64850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:18:05.178410   64850 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:18:05.179656   64850 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:18:05.181016   64850 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:18:05.182297   64850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:18:05.184009   64850 config.go:182] Loaded profile config "old-k8s-version-191773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0205 03:18:05.184648   64850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:18:05.184710   64850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:18:05.201386   64850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37347
	I0205 03:18:05.201819   64850 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:18:05.202477   64850 main.go:141] libmachine: Using API Version  1
	I0205 03:18:05.202506   64850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:18:05.203001   64850 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:18:05.203224   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:18:05.205372   64850 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0205 03:18:05.206824   64850 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:18:05.207142   64850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:18:05.207190   64850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:18:05.222132   64850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I0205 03:18:05.222523   64850 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:18:05.222976   64850 main.go:141] libmachine: Using API Version  1
	I0205 03:18:05.223001   64850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:18:05.223312   64850 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:18:05.223502   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:18:05.261778   64850 out.go:177] * Using the kvm2 driver based on existing profile
	I0205 03:18:05.263068   64850 start.go:297] selected driver: kvm2
	I0205 03:18:05.263090   64850 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-191773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:18:05.263228   64850 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:18:05.263969   64850 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:18:05.264071   64850 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:18:05.279804   64850 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:18:05.280264   64850 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:18:05.280311   64850 cni.go:84] Creating CNI manager for ""
	I0205 03:18:05.280364   64850 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:18:05.280422   64850 start.go:340] cluster config:
	{Name:old-k8s-version-191773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:18:05.280553   64850 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:18:05.282735   64850 out.go:177] * Starting "old-k8s-version-191773" primary control-plane node in "old-k8s-version-191773" cluster
	I0205 03:18:05.284135   64850 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0205 03:18:05.284191   64850 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0205 03:18:05.284214   64850 cache.go:56] Caching tarball of preloaded images
	I0205 03:18:05.284318   64850 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 03:18:05.284334   64850 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0205 03:18:05.284466   64850 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/config.json ...
	I0205 03:18:05.284742   64850 start.go:360] acquireMachinesLock for old-k8s-version-191773: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:18:05.284814   64850 start.go:364] duration metric: took 35.545µs to acquireMachinesLock for "old-k8s-version-191773"
	I0205 03:18:05.284831   64850 start.go:96] Skipping create...Using existing machine configuration
	I0205 03:18:05.284841   64850 fix.go:54] fixHost starting: 
	I0205 03:18:05.285420   64850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:18:05.285471   64850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:18:05.301833   64850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41507
	I0205 03:18:05.302251   64850 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:18:05.302759   64850 main.go:141] libmachine: Using API Version  1
	I0205 03:18:05.302780   64850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:18:05.303110   64850 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:18:05.303364   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:18:05.303519   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetState
	I0205 03:18:05.305107   64850 fix.go:112] recreateIfNeeded on old-k8s-version-191773: state=Stopped err=<nil>
	I0205 03:18:05.305139   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	W0205 03:18:05.305298   64850 fix.go:138] unexpected machine state, will restart: <nil>
	I0205 03:18:05.307648   64850 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-191773" ...
	I0205 03:18:05.308723   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .Start
	I0205 03:18:05.308955   64850 main.go:141] libmachine: (old-k8s-version-191773) starting domain...
	I0205 03:18:05.308978   64850 main.go:141] libmachine: (old-k8s-version-191773) ensuring networks are active...
	I0205 03:18:05.309811   64850 main.go:141] libmachine: (old-k8s-version-191773) Ensuring network default is active
	I0205 03:18:05.310132   64850 main.go:141] libmachine: (old-k8s-version-191773) Ensuring network mk-old-k8s-version-191773 is active
	I0205 03:18:05.310537   64850 main.go:141] libmachine: (old-k8s-version-191773) getting domain XML...
	I0205 03:18:05.311358   64850 main.go:141] libmachine: (old-k8s-version-191773) creating domain...
	I0205 03:18:06.597114   64850 main.go:141] libmachine: (old-k8s-version-191773) waiting for IP...
	I0205 03:18:06.598396   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:06.598891   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:06.598968   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:06.598876   64885 retry.go:31] will retry after 198.637892ms: waiting for domain to come up
	I0205 03:18:06.799506   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:06.800239   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:06.800268   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:06.800206   64885 retry.go:31] will retry after 291.688294ms: waiting for domain to come up
	I0205 03:18:07.093664   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:07.094248   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:07.094283   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:07.094233   64885 retry.go:31] will retry after 405.031989ms: waiting for domain to come up
	I0205 03:18:07.500351   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:07.500863   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:07.500899   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:07.500822   64885 retry.go:31] will retry after 522.133089ms: waiting for domain to come up
	I0205 03:18:08.024500   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:08.025055   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:08.025085   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:08.025015   64885 retry.go:31] will retry after 630.89054ms: waiting for domain to come up
	I0205 03:18:08.658039   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:08.658515   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:08.658540   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:08.658490   64885 retry.go:31] will retry after 923.253557ms: waiting for domain to come up
	I0205 03:18:09.583130   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:09.583605   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:09.583629   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:09.583582   64885 retry.go:31] will retry after 741.372501ms: waiting for domain to come up
	I0205 03:18:10.326896   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:10.327323   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:10.327360   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:10.327308   64885 retry.go:31] will retry after 907.872783ms: waiting for domain to come up
	I0205 03:18:11.236530   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:11.237063   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:11.237098   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:11.237014   64885 retry.go:31] will retry after 1.242660759s: waiting for domain to come up
	I0205 03:18:12.480949   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:12.481480   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:12.481533   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:12.481466   64885 retry.go:31] will retry after 2.23324546s: waiting for domain to come up
	I0205 03:18:14.716618   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:14.717197   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:14.717228   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:14.717151   64885 retry.go:31] will retry after 2.907344084s: waiting for domain to come up
	I0205 03:18:17.626752   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:17.627349   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:17.627388   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:17.627315   64885 retry.go:31] will retry after 3.128007842s: waiting for domain to come up
	I0205 03:18:20.756352   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:20.756860   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | unable to find current IP address of domain old-k8s-version-191773 in network mk-old-k8s-version-191773
	I0205 03:18:20.756889   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | I0205 03:18:20.756820   64885 retry.go:31] will retry after 2.873797115s: waiting for domain to come up
	I0205 03:18:23.633876   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.634466   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has current primary IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.634490   64850 main.go:141] libmachine: (old-k8s-version-191773) found domain IP: 192.168.39.74
	I0205 03:18:23.634504   64850 main.go:141] libmachine: (old-k8s-version-191773) reserving static IP address...
	I0205 03:18:23.635056   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "old-k8s-version-191773", mac: "52:54:00:87:fe:dd", ip: "192.168.39.74"} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:23.635085   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | skip adding static IP to network mk-old-k8s-version-191773 - found existing host DHCP lease matching {name: "old-k8s-version-191773", mac: "52:54:00:87:fe:dd", ip: "192.168.39.74"}
	I0205 03:18:23.635097   64850 main.go:141] libmachine: (old-k8s-version-191773) reserved static IP address 192.168.39.74 for domain old-k8s-version-191773
	I0205 03:18:23.635117   64850 main.go:141] libmachine: (old-k8s-version-191773) waiting for SSH...
	I0205 03:18:23.635129   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | Getting to WaitForSSH function...
	I0205 03:18:23.637688   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.638053   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:23.638087   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.638237   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | Using SSH client type: external
	I0205 03:18:23.638265   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa (-rw-------)
	I0205 03:18:23.638318   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 03:18:23.638336   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | About to run SSH command:
	I0205 03:18:23.638348   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | exit 0
	I0205 03:18:23.761231   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | SSH cmd err, output: <nil>: 
	I0205 03:18:23.761576   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetConfigRaw
	I0205 03:18:23.762279   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetIP
	I0205 03:18:23.764645   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.765090   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:23.765122   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.765387   64850 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/config.json ...
	I0205 03:18:23.765594   64850 machine.go:93] provisionDockerMachine start ...
	I0205 03:18:23.765611   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:18:23.765837   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:18:23.768091   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.768492   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:23.768522   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.768649   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:18:23.768821   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:23.768976   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:23.769128   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:18:23.769276   64850 main.go:141] libmachine: Using SSH client type: native
	I0205 03:18:23.769482   64850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0205 03:18:23.769493   64850 main.go:141] libmachine: About to run SSH command:
	hostname
	I0205 03:18:23.869757   64850 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0205 03:18:23.869791   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetMachineName
	I0205 03:18:23.870021   64850 buildroot.go:166] provisioning hostname "old-k8s-version-191773"
	I0205 03:18:23.870048   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetMachineName
	I0205 03:18:23.870252   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:18:23.872833   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.873151   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:23.873190   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.873352   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:18:23.873542   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:23.873711   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:23.873863   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:18:23.874027   64850 main.go:141] libmachine: Using SSH client type: native
	I0205 03:18:23.874218   64850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0205 03:18:23.874235   64850 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-191773 && echo "old-k8s-version-191773" | sudo tee /etc/hostname
	I0205 03:18:23.991908   64850 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-191773
	
	I0205 03:18:23.991932   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:18:23.994791   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.995068   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:23.995109   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:23.995268   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:18:23.995479   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:23.995643   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:23.995795   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:18:23.995936   64850 main.go:141] libmachine: Using SSH client type: native
	I0205 03:18:23.996141   64850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0205 03:18:23.996168   64850 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-191773' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-191773/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-191773' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:18:24.102900   64850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:18:24.102947   64850 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:18:24.103002   64850 buildroot.go:174] setting up certificates
	I0205 03:18:24.103016   64850 provision.go:84] configureAuth start
	I0205 03:18:24.103032   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetMachineName
	I0205 03:18:24.103286   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetIP
	I0205 03:18:24.105915   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.106315   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:24.106344   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.106580   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:18:24.108936   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.109300   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:24.109362   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.109488   64850 provision.go:143] copyHostCerts
	I0205 03:18:24.109565   64850 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:18:24.109580   64850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:18:24.109667   64850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:18:24.109787   64850 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:18:24.109799   64850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:18:24.109836   64850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:18:24.109909   64850 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:18:24.109919   64850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:18:24.109953   64850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:18:24.110015   64850 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-191773 san=[127.0.0.1 192.168.39.74 localhost minikube old-k8s-version-191773]
	I0205 03:18:24.265543   64850 provision.go:177] copyRemoteCerts
	I0205 03:18:24.265612   64850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:18:24.265643   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:18:24.268108   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.268491   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:24.268525   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.268735   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:18:24.268917   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:24.269045   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:18:24.269181   64850 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa Username:docker}
	I0205 03:18:24.355492   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:18:24.382304   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0205 03:18:24.408602   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0205 03:18:24.433482   64850 provision.go:87] duration metric: took 330.454496ms to configureAuth
	I0205 03:18:24.433513   64850 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:18:24.433709   64850 config.go:182] Loaded profile config "old-k8s-version-191773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0205 03:18:24.433805   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:18:24.436366   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.436807   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:24.436849   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.437030   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:18:24.437239   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:24.437462   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:24.437618   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:18:24.437800   64850 main.go:141] libmachine: Using SSH client type: native
	I0205 03:18:24.437964   64850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0205 03:18:24.437983   64850 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:18:24.655043   64850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 03:18:24.655093   64850 machine.go:96] duration metric: took 889.475574ms to provisionDockerMachine
	I0205 03:18:24.655111   64850 start.go:293] postStartSetup for "old-k8s-version-191773" (driver="kvm2")
	I0205 03:18:24.655151   64850 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:18:24.655178   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:18:24.655553   64850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:18:24.655592   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:18:24.658628   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.659029   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:24.659061   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.659250   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:18:24.659484   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:24.659653   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:18:24.659802   64850 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa Username:docker}
	I0205 03:18:24.740546   64850 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:18:24.744563   64850 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:18:24.744592   64850 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:18:24.744672   64850 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:18:24.744786   64850 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:18:24.744921   64850 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:18:24.754297   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:18:24.777769   64850 start.go:296] duration metric: took 122.615583ms for postStartSetup
	I0205 03:18:24.777817   64850 fix.go:56] duration metric: took 19.492974552s for fixHost
	I0205 03:18:24.777842   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:18:24.780375   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.780717   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:24.780748   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.780921   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:18:24.781111   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:24.781272   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:24.781436   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:18:24.781591   64850 main.go:141] libmachine: Using SSH client type: native
	I0205 03:18:24.781753   64850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0205 03:18:24.781763   64850 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:18:24.882058   64850 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738725504.854526783
	
	I0205 03:18:24.882085   64850 fix.go:216] guest clock: 1738725504.854526783
	I0205 03:18:24.882095   64850 fix.go:229] Guest: 2025-02-05 03:18:24.854526783 +0000 UTC Remote: 2025-02-05 03:18:24.777822341 +0000 UTC m=+19.649060613 (delta=76.704442ms)
	I0205 03:18:24.882150   64850 fix.go:200] guest clock delta is within tolerance: 76.704442ms
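
The clock check above runs "date +%s.%N" on the guest, parses the seconds.nanoseconds output, and compares it to the host-side timestamp; a resync is only needed when the delta exceeds a tolerance. A minimal Go sketch of that comparison, reusing the two timestamps printed in the log; the 1s tolerance here is an assumed value for illustration, not taken from minikube's source:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "date +%s.%N" output such as "1738725504.854526783"
// (seconds.nanoseconds, 9-digit fraction) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest timestamp as reported by "date +%s.%N" in the log above.
	guest, err := parseGuestClock("1738725504.854526783")
	if err != nil {
		panic(err)
	}
	// Host-side reference time taken from the same log entry.
	remote := time.Date(2025, 2, 5, 3, 18, 24, 777822341, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	// A 1 second tolerance is an assumed threshold for this sketch only.
	const tolerance = time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; a resync would be needed\n", delta)
	}
}
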
	I0205 03:18:24.882156   64850 start.go:83] releasing machines lock for "old-k8s-version-191773", held for 19.597333886s
	I0205 03:18:24.882176   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:18:24.882519   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetIP
	I0205 03:18:24.885199   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.885589   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:24.885616   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.885772   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:18:24.886252   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:18:24.886441   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .DriverName
	I0205 03:18:24.886529   64850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:18:24.886575   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:18:24.886676   64850 ssh_runner.go:195] Run: cat /version.json
	I0205 03:18:24.886694   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHHostname
	I0205 03:18:24.890170   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.890208   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.890233   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:24.890252   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.890499   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:18:24.890577   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:24.890608   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:24.890661   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:24.890832   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:18:24.890921   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHPort
	I0205 03:18:24.891070   64850 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa Username:docker}
	I0205 03:18:24.891200   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHKeyPath
	I0205 03:18:24.891470   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetSSHUsername
	I0205 03:18:24.891640   64850 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/old-k8s-version-191773/id_rsa Username:docker}
	I0205 03:18:24.987062   64850 ssh_runner.go:195] Run: systemctl --version
	I0205 03:18:24.992830   64850 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:18:25.138230   64850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:18:25.143904   64850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:18:25.143978   64850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:18:25.161015   64850 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0205 03:18:25.161046   64850 start.go:495] detecting cgroup driver to use...
	I0205 03:18:25.161111   64850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:18:25.177847   64850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:18:25.192089   64850 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:18:25.192157   64850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:18:25.207004   64850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:18:25.221470   64850 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:18:25.352602   64850 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:18:25.516431   64850 docker.go:233] disabling docker service ...
	I0205 03:18:25.516494   64850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:18:25.529360   64850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:18:25.540802   64850 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:18:25.652633   64850 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:18:25.761445   64850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:18:25.775871   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:18:25.792532   64850 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0205 03:18:25.792600   64850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:18:25.801937   64850 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:18:25.802010   64850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:18:25.811324   64850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:18:25.821303   64850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:18:25.831210   64850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:18:25.840789   64850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:18:25.849229   64850 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0205 03:18:25.849272   64850 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0205 03:18:25.861651   64850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
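
The fallback above is the usual pattern when the br_netfilter module is not yet loaded: the sysctl probe fails with status 255 because /proc/sys/net/bridge does not exist, so the module is loaded and IPv4 forwarding is enabled explicitly. A minimal Go sketch of that probe-then-load sequence, run over a local shell with os/exec instead of the ssh_runner used in the log:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback visible in the log: if the
// bridge-nf-call-iptables sysctl cannot be read, load br_netfilter, then
// make sure IPv4 forwarding is on.
func ensureBridgeNetfilter() error {
	// Probe the sysctl first; a missing /proc entry means the module is not loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge netfilter sysctl not available, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Enable IPv4 forwarding, as the log does with "echo 1 > /proc/sys/net/ipv4/ip_forward".
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		return fmt.Errorf("enable ip_forward: %w", err)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("setup failed:", err)
	}
}
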
	I0205 03:18:25.870603   64850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:18:25.988162   64850 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:18:26.084375   64850 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:18:26.084520   64850 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:18:26.089563   64850 start.go:563] Will wait 60s for crictl version
	I0205 03:18:26.089629   64850 ssh_runner.go:195] Run: which crictl
	I0205 03:18:26.093744   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:18:26.134755   64850 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:18:26.134863   64850 ssh_runner.go:195] Run: crio --version
	I0205 03:18:26.165378   64850 ssh_runner.go:195] Run: crio --version
	I0205 03:18:26.195860   64850 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0205 03:18:26.197030   64850 main.go:141] libmachine: (old-k8s-version-191773) Calling .GetIP
	I0205 03:18:26.199865   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:26.200245   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:fe:dd", ip: ""} in network mk-old-k8s-version-191773: {Iface:virbr2 ExpiryTime:2025-02-05 04:18:16 +0000 UTC Type:0 Mac:52:54:00:87:fe:dd Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:old-k8s-version-191773 Clientid:01:52:54:00:87:fe:dd}
	I0205 03:18:26.200276   64850 main.go:141] libmachine: (old-k8s-version-191773) DBG | domain old-k8s-version-191773 has defined IP address 192.168.39.74 and MAC address 52:54:00:87:fe:dd in network mk-old-k8s-version-191773
	I0205 03:18:26.200446   64850 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0205 03:18:26.204485   64850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:18:26.217894   64850 kubeadm.go:883] updating cluster {Name:old-k8s-version-191773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:18:26.218024   64850 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0205 03:18:26.218081   64850 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:18:26.266880   64850 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0205 03:18:26.266947   64850 ssh_runner.go:195] Run: which lz4
	I0205 03:18:26.270813   64850 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0205 03:18:26.274769   64850 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0205 03:18:26.274807   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0205 03:18:27.738741   64850 crio.go:462] duration metric: took 1.467960947s to copy over tarball
	I0205 03:18:27.738818   64850 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0205 03:18:30.750976   64850 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012119888s)
	I0205 03:18:30.751009   64850 crio.go:469] duration metric: took 3.012239107s to extract the tarball
	I0205 03:18:30.751019   64850 ssh_runner.go:146] rm: /preloaded.tar.lz4
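
The preload step above first stats /preloaded.tar.lz4 on the guest, copies the cached tarball over when the stat fails, extracts it with lz4-aware tar, and removes it afterwards. A minimal Go sketch of that check/extract/cleanup sequence run locally; the remote scp is omitted and the paths and tar flags are the ones shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the preload handling in the log: bail out if the
// tarball is absent, otherwise extract it into targetDir with lz4-aware tar
// and remove it afterwards.
func extractPreload(tarball, targetDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		// In the log this is the point where the cached tarball is copied over first.
		return fmt.Errorf("preload tarball not present: %w", err)
	}
	// Same flags as the command in the log: keep xattrs, decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", targetDir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
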
	I0205 03:18:30.791990   64850 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:18:30.825936   64850 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0205 03:18:30.825966   64850 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0205 03:18:30.826051   64850 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:18:30.826077   64850 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0205 03:18:30.826073   64850 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0205 03:18:30.826110   64850 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:18:30.826087   64850 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0205 03:18:30.826115   64850 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:18:30.826051   64850 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:18:30.826115   64850 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:18:30.827900   64850 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0205 03:18:30.827913   64850 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:18:30.827913   64850 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:18:30.827929   64850 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0205 03:18:30.827935   64850 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:18:30.827909   64850 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:18:30.827900   64850 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0205 03:18:30.828083   64850 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:18:30.953842   64850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0205 03:18:30.961504   64850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0205 03:18:30.962338   64850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:18:30.974230   64850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:18:30.993909   64850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0205 03:18:31.002719   64850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:18:31.008782   64850 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0205 03:18:31.008829   64850 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0205 03:18:31.008865   64850 ssh_runner.go:195] Run: which crictl
	I0205 03:18:31.016732   64850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:18:31.087413   64850 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0205 03:18:31.087456   64850 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0205 03:18:31.087490   64850 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0205 03:18:31.087511   64850 ssh_runner.go:195] Run: which crictl
	I0205 03:18:31.087515   64850 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:18:31.087543   64850 ssh_runner.go:195] Run: which crictl
	I0205 03:18:31.106095   64850 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0205 03:18:31.106161   64850 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:18:31.106214   64850 ssh_runner.go:195] Run: which crictl
	I0205 03:18:31.106430   64850 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0205 03:18:31.106465   64850 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0205 03:18:31.106504   64850 ssh_runner.go:195] Run: which crictl
	I0205 03:18:31.138997   64850 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0205 03:18:31.139038   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0205 03:18:31.139040   64850 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:18:31.139070   64850 ssh_runner.go:195] Run: which crictl
	I0205 03:18:31.150867   64850 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0205 03:18:31.150916   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0205 03:18:31.150919   64850 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:18:31.150932   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:18:31.150946   64850 ssh_runner.go:195] Run: which crictl
	I0205 03:18:31.151007   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0205 03:18:31.151027   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:18:31.151046   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:18:31.246093   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0205 03:18:31.278375   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:18:31.278407   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:18:31.278377   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:18:31.278500   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0205 03:18:31.278503   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:18:31.278469   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0205 03:18:31.357695   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0205 03:18:31.429527   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0205 03:18:31.429567   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0205 03:18:31.430633   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0205 03:18:31.430699   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:18:31.430775   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0205 03:18:31.430799   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0205 03:18:31.482430   64850 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0205 03:18:31.542899   64850 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0205 03:18:31.549081   64850 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0205 03:18:31.571327   64850 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0205 03:18:31.571419   64850 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0205 03:18:31.571434   64850 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0205 03:18:31.571472   64850 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0205 03:18:31.601689   64850 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0205 03:18:31.745170   64850 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:18:31.888856   64850 cache_images.go:92] duration metric: took 1.062869441s to LoadCachedImages
	W0205 03:18:31.888972   64850 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20363-12788/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0205 03:18:31.888992   64850 kubeadm.go:934] updating node { 192.168.39.74 8443 v1.20.0 crio true true} ...
	I0205 03:18:31.889107   64850 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-191773 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0205 03:18:31.889196   64850 ssh_runner.go:195] Run: crio config
	I0205 03:18:31.935798   64850 cni.go:84] Creating CNI manager for ""
	I0205 03:18:31.935820   64850 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:18:31.935829   64850 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:18:31.935847   64850 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.74 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-191773 NodeName:old-k8s-version-191773 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0205 03:18:31.935959   64850 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-191773"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
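
The kubeadm, kubelet, and kube-proxy documents above are rendered from the cluster profile values (advertise address, Kubernetes version, CIDRs) before being copied to the node as kubeadm.yaml.new. A minimal Go sketch of producing a similar, heavily trimmed document with text/template; the template text and field names are illustrative stand-ins, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// clusterValues holds the handful of fields visible in the rendered config above.
type clusterValues struct {
	AdvertiseAddress  string
	BindPort          int
	KubernetesVersion string
	ServiceCIDR       string
	PodCIDR           string
	NodeName          string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values taken from the rendered configuration in the log above.
	vals := clusterValues{
		AdvertiseAddress:  "192.168.39.74",
		BindPort:          8443,
		KubernetesVersion: "v1.20.0",
		ServiceCIDR:       "10.96.0.0/12",
		PodCIDR:           "10.244.0.0/16",
		NodeName:          "old-k8s-version-191773",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, vals); err != nil {
		panic(err)
	}
}
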
	
	I0205 03:18:31.936016   64850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0205 03:18:31.946347   64850 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:18:31.946406   64850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:18:31.955593   64850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0205 03:18:31.971553   64850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:18:31.987234   64850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0205 03:18:32.004063   64850 ssh_runner.go:195] Run: grep 192.168.39.74	control-plane.minikube.internal$ /etc/hosts
	I0205 03:18:32.008342   64850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:18:32.021046   64850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:18:32.139141   64850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:18:32.158079   64850 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773 for IP: 192.168.39.74
	I0205 03:18:32.158101   64850 certs.go:194] generating shared ca certs ...
	I0205 03:18:32.158122   64850 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:18:32.158282   64850 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:18:32.158338   64850 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:18:32.158353   64850 certs.go:256] generating profile certs ...
	I0205 03:18:32.158461   64850 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/client.key
	I0205 03:18:32.158524   64850 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.key.213c5845
	I0205 03:18:32.158571   64850 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/proxy-client.key
	I0205 03:18:32.158701   64850 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:18:32.158731   64850 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:18:32.158746   64850 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:18:32.158779   64850 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:18:32.158813   64850 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:18:32.158846   64850 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:18:32.158907   64850 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:18:32.159559   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:18:32.199175   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:18:32.237245   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:18:32.277374   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:18:32.312400   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0205 03:18:32.341331   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0205 03:18:32.374091   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:18:32.410283   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/old-k8s-version-191773/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 03:18:32.457799   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:18:32.484733   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:18:32.513519   64850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:18:32.546952   64850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:18:32.569539   64850 ssh_runner.go:195] Run: openssl version
	I0205 03:18:32.576402   64850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:18:32.588130   64850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:18:32.593982   64850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:18:32.594052   64850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:18:32.600677   64850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 03:18:32.612618   64850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19989.pem && ln -fs /usr/share/ca-certificates/19989.pem /etc/ssl/certs/19989.pem"
	I0205 03:18:32.624179   64850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19989.pem
	I0205 03:18:32.629689   64850 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:11 /usr/share/ca-certificates/19989.pem
	I0205 03:18:32.629749   64850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19989.pem
	I0205 03:18:32.635770   64850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19989.pem /etc/ssl/certs/51391683.0"
	I0205 03:18:32.647566   64850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:18:32.659046   64850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:18:32.663897   64850 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:18:32.663962   64850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:18:32.670143   64850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:18:32.681598   64850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 03:18:32.686975   64850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0205 03:18:32.694688   64850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0205 03:18:32.700434   64850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0205 03:18:32.706746   64850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0205 03:18:32.713046   64850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0205 03:18:32.721269   64850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0205 03:18:32.727746   64850 kubeadm.go:392] StartCluster: {Name:old-k8s-version-191773 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:18:32.727838   64850 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 03:18:32.727890   64850 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:18:32.766809   64850 cri.go:89] found id: ""
	I0205 03:18:32.766916   64850 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 03:18:32.778601   64850 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0205 03:18:32.778624   64850 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0205 03:18:32.778674   64850 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0205 03:18:32.792164   64850 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0205 03:18:32.793480   64850 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-191773" does not appear in /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:18:32.794172   64850 kubeconfig.go:62] /home/jenkins/minikube-integration/20363-12788/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-191773" cluster setting kubeconfig missing "old-k8s-version-191773" context setting]
	I0205 03:18:32.795456   64850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:18:32.836425   64850 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0205 03:18:32.850090   64850 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.74
	I0205 03:18:32.850143   64850 kubeadm.go:1160] stopping kube-system containers ...
	I0205 03:18:32.850159   64850 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0205 03:18:32.850225   64850 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:18:32.891939   64850 cri.go:89] found id: ""
	I0205 03:18:32.892011   64850 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0205 03:18:32.909281   64850 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:18:32.919227   64850 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:18:32.919250   64850 kubeadm.go:157] found existing configuration files:
	
	I0205 03:18:32.919314   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:18:32.929738   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:18:32.929861   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:18:32.942699   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:18:32.951579   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:18:32.951651   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:18:32.960499   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:18:32.969257   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:18:32.969323   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:18:32.978866   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:18:32.987826   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:18:32.987906   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 03:18:32.997834   64850 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 03:18:33.007672   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0205 03:18:33.133696   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0205 03:18:33.938738   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0205 03:18:34.200230   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0205 03:18:34.313705   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0205 03:18:34.411888   64850 api_server.go:52] waiting for apiserver process to appear ...
	I0205 03:18:34.411967   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:34.912731   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:35.413073   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:35.912972   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:36.412792   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:36.913007   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:37.412132   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:37.912499   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:38.412792   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:38.912097   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:39.412734   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:39.912272   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:40.413002   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:40.912793   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:41.412172   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:41.912791   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:42.412975   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:42.912320   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:43.412093   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:43.912120   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:44.413004   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:44.912909   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:45.412773   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:45.912060   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:46.412492   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:46.913153   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:47.412067   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:47.913102   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:48.412256   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:48.912050   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:49.412347   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:49.912680   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:50.412115   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:50.912812   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:51.412269   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:51.912889   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:52.412690   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:52.913037   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:53.412121   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:53.912741   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:54.412901   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:54.912242   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:55.412311   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:55.912660   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:56.412079   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:56.912351   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:57.412340   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:57.912262   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:58.412651   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:58.912769   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:59.412080   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:18:59.912080   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:00.412443   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:00.912994   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:01.412270   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:01.912099   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:02.412339   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:02.912914   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:03.412928   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:03.912471   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:04.412686   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:04.912551   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:05.412565   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:05.912928   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:06.412050   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:06.912137   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:07.412942   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:07.912854   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:08.412859   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:08.912670   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:09.412506   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:09.912545   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:10.412573   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:10.912475   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:11.412344   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:11.912066   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:12.412213   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:12.912444   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:13.412197   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:13.912339   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:14.412300   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:14.912550   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:15.412813   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:15.912847   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:16.412105   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:16.912245   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:17.412801   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:17.912111   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:18.412952   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:18.912280   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:19.412312   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:19.912641   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:20.412685   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:20.912149   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:21.412481   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:21.912803   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:22.412325   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:22.912366   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:23.412125   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:23.912223   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:24.412803   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:24.912796   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:25.413067   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:25.912587   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:26.412490   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:26.912214   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:27.412872   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:27.912533   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:28.412512   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:28.913006   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:29.412950   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:29.912455   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:30.412510   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:30.912629   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:31.412338   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:31.912052   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:32.412435   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:32.912369   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:33.413012   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:33.912786   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:34.413044   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:19:34.413141   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:19:34.455609   64850 cri.go:89] found id: ""
	I0205 03:19:34.455640   64850 logs.go:282] 0 containers: []
	W0205 03:19:34.455648   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:19:34.455654   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:19:34.455712   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:19:34.500757   64850 cri.go:89] found id: ""
	I0205 03:19:34.500782   64850 logs.go:282] 0 containers: []
	W0205 03:19:34.500790   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:19:34.500796   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:19:34.500854   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:19:34.537218   64850 cri.go:89] found id: ""
	I0205 03:19:34.537254   64850 logs.go:282] 0 containers: []
	W0205 03:19:34.537263   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:19:34.537269   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:19:34.537324   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:19:34.571397   64850 cri.go:89] found id: ""
	I0205 03:19:34.571426   64850 logs.go:282] 0 containers: []
	W0205 03:19:34.571433   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:19:34.571439   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:19:34.571503   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:19:34.606222   64850 cri.go:89] found id: ""
	I0205 03:19:34.606252   64850 logs.go:282] 0 containers: []
	W0205 03:19:34.606262   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:19:34.606269   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:19:34.606339   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:19:34.639611   64850 cri.go:89] found id: ""
	I0205 03:19:34.639642   64850 logs.go:282] 0 containers: []
	W0205 03:19:34.639650   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:19:34.639658   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:19:34.639740   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:19:34.673881   64850 cri.go:89] found id: ""
	I0205 03:19:34.673907   64850 logs.go:282] 0 containers: []
	W0205 03:19:34.673914   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:19:34.673920   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:19:34.673970   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:19:34.721225   64850 cri.go:89] found id: ""
	I0205 03:19:34.721258   64850 logs.go:282] 0 containers: []
	W0205 03:19:34.721270   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:19:34.721282   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:19:34.721297   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:19:34.736387   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:19:34.736429   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:19:34.860248   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:19:34.860273   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:19:34.860288   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:19:34.929175   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:19:34.929217   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:19:34.966111   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:19:34.966141   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:19:37.519478   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:37.536836   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:19:37.536901   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:19:37.573497   64850 cri.go:89] found id: ""
	I0205 03:19:37.573528   64850 logs.go:282] 0 containers: []
	W0205 03:19:37.573540   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:19:37.573547   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:19:37.573613   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:19:37.607548   64850 cri.go:89] found id: ""
	I0205 03:19:37.607583   64850 logs.go:282] 0 containers: []
	W0205 03:19:37.607594   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:19:37.607602   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:19:37.607662   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:19:37.641404   64850 cri.go:89] found id: ""
	I0205 03:19:37.641430   64850 logs.go:282] 0 containers: []
	W0205 03:19:37.641438   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:19:37.641444   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:19:37.641497   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:19:37.683932   64850 cri.go:89] found id: ""
	I0205 03:19:37.683966   64850 logs.go:282] 0 containers: []
	W0205 03:19:37.683978   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:19:37.683986   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:19:37.684060   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:19:37.734582   64850 cri.go:89] found id: ""
	I0205 03:19:37.734609   64850 logs.go:282] 0 containers: []
	W0205 03:19:37.734620   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:19:37.734627   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:19:37.734686   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:19:37.788767   64850 cri.go:89] found id: ""
	I0205 03:19:37.788799   64850 logs.go:282] 0 containers: []
	W0205 03:19:37.788810   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:19:37.788818   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:19:37.788881   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:19:37.829300   64850 cri.go:89] found id: ""
	I0205 03:19:37.829322   64850 logs.go:282] 0 containers: []
	W0205 03:19:37.829329   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:19:37.829334   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:19:37.829405   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:19:37.863098   64850 cri.go:89] found id: ""
	I0205 03:19:37.863127   64850 logs.go:282] 0 containers: []
	W0205 03:19:37.863138   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:19:37.863150   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:19:37.863162   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:19:37.902075   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:19:37.902105   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:19:37.949257   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:19:37.949285   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:19:37.962067   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:19:37.962090   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:19:38.033913   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:19:38.033933   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:19:38.033946   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:19:40.612961   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:40.625656   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:19:40.625732   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:19:40.660317   64850 cri.go:89] found id: ""
	I0205 03:19:40.660342   64850 logs.go:282] 0 containers: []
	W0205 03:19:40.660353   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:19:40.660360   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:19:40.660418   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:19:40.692871   64850 cri.go:89] found id: ""
	I0205 03:19:40.692900   64850 logs.go:282] 0 containers: []
	W0205 03:19:40.692907   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:19:40.692913   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:19:40.692961   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:19:40.725847   64850 cri.go:89] found id: ""
	I0205 03:19:40.725874   64850 logs.go:282] 0 containers: []
	W0205 03:19:40.725881   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:19:40.725886   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:19:40.725931   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:19:40.759485   64850 cri.go:89] found id: ""
	I0205 03:19:40.759516   64850 logs.go:282] 0 containers: []
	W0205 03:19:40.759527   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:19:40.759535   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:19:40.759595   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:19:40.791981   64850 cri.go:89] found id: ""
	I0205 03:19:40.792008   64850 logs.go:282] 0 containers: []
	W0205 03:19:40.792022   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:19:40.792029   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:19:40.792082   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:19:40.839548   64850 cri.go:89] found id: ""
	I0205 03:19:40.839568   64850 logs.go:282] 0 containers: []
	W0205 03:19:40.839578   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:19:40.839585   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:19:40.839642   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:19:40.874848   64850 cri.go:89] found id: ""
	I0205 03:19:40.874874   64850 logs.go:282] 0 containers: []
	W0205 03:19:40.874884   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:19:40.874891   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:19:40.874945   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:19:40.912308   64850 cri.go:89] found id: ""
	I0205 03:19:40.912337   64850 logs.go:282] 0 containers: []
	W0205 03:19:40.912347   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:19:40.912357   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:19:40.912369   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:19:40.999750   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:19:40.999778   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:19:41.038810   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:19:41.038832   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:19:41.091456   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:19:41.091489   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:19:41.105636   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:19:41.105662   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:19:41.179801   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:19:43.680449   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:43.694682   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:19:43.694755   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:19:43.729260   64850 cri.go:89] found id: ""
	I0205 03:19:43.729285   64850 logs.go:282] 0 containers: []
	W0205 03:19:43.729293   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:19:43.729298   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:19:43.729364   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:19:43.773828   64850 cri.go:89] found id: ""
	I0205 03:19:43.773853   64850 logs.go:282] 0 containers: []
	W0205 03:19:43.773860   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:19:43.773866   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:19:43.773922   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:19:43.805545   64850 cri.go:89] found id: ""
	I0205 03:19:43.805570   64850 logs.go:282] 0 containers: []
	W0205 03:19:43.805579   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:19:43.805585   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:19:43.805632   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:19:43.836564   64850 cri.go:89] found id: ""
	I0205 03:19:43.836591   64850 logs.go:282] 0 containers: []
	W0205 03:19:43.836599   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:19:43.836606   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:19:43.836667   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:19:43.867920   64850 cri.go:89] found id: ""
	I0205 03:19:43.867943   64850 logs.go:282] 0 containers: []
	W0205 03:19:43.867951   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:19:43.867956   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:19:43.868002   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:19:43.899951   64850 cri.go:89] found id: ""
	I0205 03:19:43.899980   64850 logs.go:282] 0 containers: []
	W0205 03:19:43.899988   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:19:43.899994   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:19:43.900044   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:19:43.935102   64850 cri.go:89] found id: ""
	I0205 03:19:43.935153   64850 logs.go:282] 0 containers: []
	W0205 03:19:43.935167   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:19:43.935178   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:19:43.935250   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:19:43.973746   64850 cri.go:89] found id: ""
	I0205 03:19:43.973768   64850 logs.go:282] 0 containers: []
	W0205 03:19:43.973775   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:19:43.973784   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:19:43.973797   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:19:43.987784   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:19:43.987824   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:19:44.067090   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:19:44.067120   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:19:44.067136   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:19:44.142677   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:19:44.142713   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:19:44.183165   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:19:44.183199   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:19:46.736936   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:46.751205   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:19:46.751279   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:19:46.784335   64850 cri.go:89] found id: ""
	I0205 03:19:46.784364   64850 logs.go:282] 0 containers: []
	W0205 03:19:46.784373   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:19:46.784390   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:19:46.784438   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:19:46.816875   64850 cri.go:89] found id: ""
	I0205 03:19:46.816907   64850 logs.go:282] 0 containers: []
	W0205 03:19:46.816917   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:19:46.816924   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:19:46.816987   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:19:46.851282   64850 cri.go:89] found id: ""
	I0205 03:19:46.851309   64850 logs.go:282] 0 containers: []
	W0205 03:19:46.851317   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:19:46.851323   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:19:46.851398   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:19:46.888918   64850 cri.go:89] found id: ""
	I0205 03:19:46.888948   64850 logs.go:282] 0 containers: []
	W0205 03:19:46.888960   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:19:46.888968   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:19:46.889020   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:19:46.922301   64850 cri.go:89] found id: ""
	I0205 03:19:46.922323   64850 logs.go:282] 0 containers: []
	W0205 03:19:46.922331   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:19:46.922336   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:19:46.922392   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:19:46.955229   64850 cri.go:89] found id: ""
	I0205 03:19:46.955255   64850 logs.go:282] 0 containers: []
	W0205 03:19:46.955262   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:19:46.955268   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:19:46.955316   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:19:46.988613   64850 cri.go:89] found id: ""
	I0205 03:19:46.988639   64850 logs.go:282] 0 containers: []
	W0205 03:19:46.988648   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:19:46.988655   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:19:46.988720   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:19:47.025534   64850 cri.go:89] found id: ""
	I0205 03:19:47.025559   64850 logs.go:282] 0 containers: []
	W0205 03:19:47.025567   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:19:47.025575   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:19:47.025586   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:19:47.073468   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:19:47.073501   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:19:47.088032   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:19:47.088059   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:19:47.154472   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:19:47.154495   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:19:47.154513   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:19:47.232768   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:19:47.232808   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:19:49.772382   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:49.785552   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:19:49.785620   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:19:49.823574   64850 cri.go:89] found id: ""
	I0205 03:19:49.823605   64850 logs.go:282] 0 containers: []
	W0205 03:19:49.823613   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:19:49.823619   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:19:49.823665   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:19:49.855579   64850 cri.go:89] found id: ""
	I0205 03:19:49.855603   64850 logs.go:282] 0 containers: []
	W0205 03:19:49.855611   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:19:49.855616   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:19:49.855661   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:19:49.888586   64850 cri.go:89] found id: ""
	I0205 03:19:49.888610   64850 logs.go:282] 0 containers: []
	W0205 03:19:49.888617   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:19:49.888623   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:19:49.888682   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:19:49.925547   64850 cri.go:89] found id: ""
	I0205 03:19:49.925578   64850 logs.go:282] 0 containers: []
	W0205 03:19:49.925597   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:19:49.925605   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:19:49.925670   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:19:49.958167   64850 cri.go:89] found id: ""
	I0205 03:19:49.958194   64850 logs.go:282] 0 containers: []
	W0205 03:19:49.958202   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:19:49.958207   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:19:49.958258   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:19:49.994032   64850 cri.go:89] found id: ""
	I0205 03:19:49.994058   64850 logs.go:282] 0 containers: []
	W0205 03:19:49.994066   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:19:49.994073   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:19:49.994130   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:19:50.029363   64850 cri.go:89] found id: ""
	I0205 03:19:50.029403   64850 logs.go:282] 0 containers: []
	W0205 03:19:50.029413   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:19:50.029420   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:19:50.029468   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:19:50.063511   64850 cri.go:89] found id: ""
	I0205 03:19:50.063544   64850 logs.go:282] 0 containers: []
	W0205 03:19:50.063553   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:19:50.063563   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:19:50.063576   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:19:50.114403   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:19:50.114442   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:19:50.129294   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:19:50.129323   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:19:50.207354   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:19:50.207378   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:19:50.207392   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:19:50.280110   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:19:50.280150   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:19:52.821732   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:52.834518   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:19:52.834635   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:19:52.868495   64850 cri.go:89] found id: ""
	I0205 03:19:52.868525   64850 logs.go:282] 0 containers: []
	W0205 03:19:52.868534   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:19:52.868540   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:19:52.868598   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:19:52.901224   64850 cri.go:89] found id: ""
	I0205 03:19:52.901257   64850 logs.go:282] 0 containers: []
	W0205 03:19:52.901264   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:19:52.901271   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:19:52.901348   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:19:52.937257   64850 cri.go:89] found id: ""
	I0205 03:19:52.937283   64850 logs.go:282] 0 containers: []
	W0205 03:19:52.937294   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:19:52.937301   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:19:52.937383   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:19:52.973620   64850 cri.go:89] found id: ""
	I0205 03:19:52.973647   64850 logs.go:282] 0 containers: []
	W0205 03:19:52.973655   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:19:52.973661   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:19:52.973708   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:19:53.007786   64850 cri.go:89] found id: ""
	I0205 03:19:53.007814   64850 logs.go:282] 0 containers: []
	W0205 03:19:53.007825   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:19:53.007833   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:19:53.007916   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:19:53.040111   64850 cri.go:89] found id: ""
	I0205 03:19:53.040145   64850 logs.go:282] 0 containers: []
	W0205 03:19:53.040157   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:19:53.040166   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:19:53.040226   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:19:53.073908   64850 cri.go:89] found id: ""
	I0205 03:19:53.073940   64850 logs.go:282] 0 containers: []
	W0205 03:19:53.073951   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:19:53.073958   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:19:53.074019   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:19:53.107390   64850 cri.go:89] found id: ""
	I0205 03:19:53.107418   64850 logs.go:282] 0 containers: []
	W0205 03:19:53.107426   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:19:53.107435   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:19:53.107447   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:19:53.193041   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:19:53.193088   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:19:53.233453   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:19:53.233491   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:19:53.289493   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:19:53.289527   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:19:53.306001   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:19:53.306034   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:19:53.381309   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:19:55.882243   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:55.895090   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:19:55.895161   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:19:55.929801   64850 cri.go:89] found id: ""
	I0205 03:19:55.929837   64850 logs.go:282] 0 containers: []
	W0205 03:19:55.929848   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:19:55.929873   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:19:55.929935   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:19:55.964923   64850 cri.go:89] found id: ""
	I0205 03:19:55.964950   64850 logs.go:282] 0 containers: []
	W0205 03:19:55.964960   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:19:55.964968   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:19:55.965035   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:19:56.000795   64850 cri.go:89] found id: ""
	I0205 03:19:56.000821   64850 logs.go:282] 0 containers: []
	W0205 03:19:56.000828   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:19:56.000833   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:19:56.000882   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:19:56.033265   64850 cri.go:89] found id: ""
	I0205 03:19:56.033297   64850 logs.go:282] 0 containers: []
	W0205 03:19:56.033310   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:19:56.033321   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:19:56.033410   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:19:56.068040   64850 cri.go:89] found id: ""
	I0205 03:19:56.068064   64850 logs.go:282] 0 containers: []
	W0205 03:19:56.068071   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:19:56.068077   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:19:56.068121   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:19:56.103249   64850 cri.go:89] found id: ""
	I0205 03:19:56.103279   64850 logs.go:282] 0 containers: []
	W0205 03:19:56.103290   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:19:56.103298   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:19:56.103360   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:19:56.135944   64850 cri.go:89] found id: ""
	I0205 03:19:56.135978   64850 logs.go:282] 0 containers: []
	W0205 03:19:56.135990   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:19:56.135999   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:19:56.136063   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:19:56.171894   64850 cri.go:89] found id: ""
	I0205 03:19:56.171923   64850 logs.go:282] 0 containers: []
	W0205 03:19:56.171935   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:19:56.171947   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:19:56.171960   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:19:56.224091   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:19:56.224126   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:19:56.237589   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:19:56.237620   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:19:56.316364   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:19:56.316387   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:19:56.316401   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:19:56.392383   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:19:56.392436   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:19:58.936917   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:19:58.949611   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:19:58.949725   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:19:58.988321   64850 cri.go:89] found id: ""
	I0205 03:19:58.988348   64850 logs.go:282] 0 containers: []
	W0205 03:19:58.988359   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:19:58.988368   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:19:58.988479   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:19:59.026542   64850 cri.go:89] found id: ""
	I0205 03:19:59.026568   64850 logs.go:282] 0 containers: []
	W0205 03:19:59.026576   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:19:59.026581   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:19:59.026628   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:19:59.058887   64850 cri.go:89] found id: ""
	I0205 03:19:59.058921   64850 logs.go:282] 0 containers: []
	W0205 03:19:59.058931   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:19:59.058941   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:19:59.059001   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:19:59.096917   64850 cri.go:89] found id: ""
	I0205 03:19:59.096957   64850 logs.go:282] 0 containers: []
	W0205 03:19:59.096968   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:19:59.096977   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:19:59.097044   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:19:59.134074   64850 cri.go:89] found id: ""
	I0205 03:19:59.134100   64850 logs.go:282] 0 containers: []
	W0205 03:19:59.134110   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:19:59.134118   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:19:59.134182   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:19:59.171747   64850 cri.go:89] found id: ""
	I0205 03:19:59.171785   64850 logs.go:282] 0 containers: []
	W0205 03:19:59.171796   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:19:59.171804   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:19:59.171865   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:19:59.208730   64850 cri.go:89] found id: ""
	I0205 03:19:59.208761   64850 logs.go:282] 0 containers: []
	W0205 03:19:59.208770   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:19:59.208779   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:19:59.208839   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:19:59.246836   64850 cri.go:89] found id: ""
	I0205 03:19:59.246864   64850 logs.go:282] 0 containers: []
	W0205 03:19:59.246874   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:19:59.246885   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:19:59.246901   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:19:59.337411   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:19:59.337445   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:19:59.383174   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:19:59.383202   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:19:59.433729   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:19:59.433763   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:19:59.446657   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:19:59.446682   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:19:59.516496   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:02.017427   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:02.030717   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:02.030789   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:02.066306   64850 cri.go:89] found id: ""
	I0205 03:20:02.066337   64850 logs.go:282] 0 containers: []
	W0205 03:20:02.066346   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:02.066354   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:02.066425   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:02.108209   64850 cri.go:89] found id: ""
	I0205 03:20:02.108235   64850 logs.go:282] 0 containers: []
	W0205 03:20:02.108242   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:02.108247   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:02.108301   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:02.148333   64850 cri.go:89] found id: ""
	I0205 03:20:02.148358   64850 logs.go:282] 0 containers: []
	W0205 03:20:02.148367   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:02.148373   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:02.148419   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:02.189756   64850 cri.go:89] found id: ""
	I0205 03:20:02.189785   64850 logs.go:282] 0 containers: []
	W0205 03:20:02.189794   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:02.189804   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:02.189871   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:02.227862   64850 cri.go:89] found id: ""
	I0205 03:20:02.227886   64850 logs.go:282] 0 containers: []
	W0205 03:20:02.227893   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:02.227899   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:02.227948   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:02.267002   64850 cri.go:89] found id: ""
	I0205 03:20:02.267030   64850 logs.go:282] 0 containers: []
	W0205 03:20:02.267041   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:02.267049   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:02.267105   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:02.304397   64850 cri.go:89] found id: ""
	I0205 03:20:02.304437   64850 logs.go:282] 0 containers: []
	W0205 03:20:02.304447   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:02.304456   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:02.304510   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:02.339050   64850 cri.go:89] found id: ""
	I0205 03:20:02.339077   64850 logs.go:282] 0 containers: []
	W0205 03:20:02.339084   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:02.339093   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:02.339105   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:02.403697   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:02.403732   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:02.419360   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:02.419388   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:02.508596   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:02.508624   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:02.508641   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:02.590591   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:02.590631   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:05.136482   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:05.149559   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:05.149639   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:05.184990   64850 cri.go:89] found id: ""
	I0205 03:20:05.185012   64850 logs.go:282] 0 containers: []
	W0205 03:20:05.185019   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:05.185025   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:05.185085   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:05.224759   64850 cri.go:89] found id: ""
	I0205 03:20:05.224783   64850 logs.go:282] 0 containers: []
	W0205 03:20:05.224790   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:05.224795   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:05.224843   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:05.261763   64850 cri.go:89] found id: ""
	I0205 03:20:05.261794   64850 logs.go:282] 0 containers: []
	W0205 03:20:05.261805   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:05.261812   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:05.261877   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:05.296643   64850 cri.go:89] found id: ""
	I0205 03:20:05.296673   64850 logs.go:282] 0 containers: []
	W0205 03:20:05.296683   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:05.296690   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:05.296752   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:05.329500   64850 cri.go:89] found id: ""
	I0205 03:20:05.329525   64850 logs.go:282] 0 containers: []
	W0205 03:20:05.329532   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:05.329538   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:05.329585   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:05.365529   64850 cri.go:89] found id: ""
	I0205 03:20:05.365557   64850 logs.go:282] 0 containers: []
	W0205 03:20:05.365565   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:05.365571   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:05.365621   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:05.400573   64850 cri.go:89] found id: ""
	I0205 03:20:05.400611   64850 logs.go:282] 0 containers: []
	W0205 03:20:05.400620   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:05.400625   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:05.400681   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:05.437259   64850 cri.go:89] found id: ""
	I0205 03:20:05.437285   64850 logs.go:282] 0 containers: []
	W0205 03:20:05.437295   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:05.437306   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:05.437319   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:05.513159   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:05.513190   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:05.513205   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:05.608660   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:05.608693   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:05.649623   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:05.649653   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:05.700890   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:05.700928   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:08.217782   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:08.230699   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:08.230771   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:08.262918   64850 cri.go:89] found id: ""
	I0205 03:20:08.262945   64850 logs.go:282] 0 containers: []
	W0205 03:20:08.262952   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:08.262958   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:08.263019   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:08.299147   64850 cri.go:89] found id: ""
	I0205 03:20:08.299188   64850 logs.go:282] 0 containers: []
	W0205 03:20:08.299200   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:08.299207   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:08.299271   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:08.332391   64850 cri.go:89] found id: ""
	I0205 03:20:08.332426   64850 logs.go:282] 0 containers: []
	W0205 03:20:08.332438   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:08.332456   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:08.332518   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:08.368707   64850 cri.go:89] found id: ""
	I0205 03:20:08.368738   64850 logs.go:282] 0 containers: []
	W0205 03:20:08.368747   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:08.368753   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:08.368806   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:08.404711   64850 cri.go:89] found id: ""
	I0205 03:20:08.404740   64850 logs.go:282] 0 containers: []
	W0205 03:20:08.404751   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:08.404760   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:08.404864   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:08.448101   64850 cri.go:89] found id: ""
	I0205 03:20:08.448124   64850 logs.go:282] 0 containers: []
	W0205 03:20:08.448132   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:08.448142   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:08.448199   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:08.483225   64850 cri.go:89] found id: ""
	I0205 03:20:08.483252   64850 logs.go:282] 0 containers: []
	W0205 03:20:08.483259   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:08.483273   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:08.483332   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:08.517285   64850 cri.go:89] found id: ""
	I0205 03:20:08.517318   64850 logs.go:282] 0 containers: []
	W0205 03:20:08.517327   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:08.517370   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:08.517387   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:08.590856   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:08.590895   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:08.590910   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:08.672330   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:08.672367   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:08.711927   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:08.711954   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:08.762623   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:08.762655   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:11.276571   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:11.291177   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:11.291251   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:11.328744   64850 cri.go:89] found id: ""
	I0205 03:20:11.328773   64850 logs.go:282] 0 containers: []
	W0205 03:20:11.328785   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:11.328792   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:11.328856   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:11.361030   64850 cri.go:89] found id: ""
	I0205 03:20:11.361066   64850 logs.go:282] 0 containers: []
	W0205 03:20:11.361077   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:11.361085   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:11.361142   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:11.393640   64850 cri.go:89] found id: ""
	I0205 03:20:11.393679   64850 logs.go:282] 0 containers: []
	W0205 03:20:11.393690   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:11.393699   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:11.393753   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:11.430954   64850 cri.go:89] found id: ""
	I0205 03:20:11.430981   64850 logs.go:282] 0 containers: []
	W0205 03:20:11.430988   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:11.430995   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:11.431054   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:11.465920   64850 cri.go:89] found id: ""
	I0205 03:20:11.465946   64850 logs.go:282] 0 containers: []
	W0205 03:20:11.465953   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:11.465959   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:11.466009   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:11.501885   64850 cri.go:89] found id: ""
	I0205 03:20:11.501918   64850 logs.go:282] 0 containers: []
	W0205 03:20:11.501930   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:11.501938   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:11.501997   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:11.540854   64850 cri.go:89] found id: ""
	I0205 03:20:11.540884   64850 logs.go:282] 0 containers: []
	W0205 03:20:11.540896   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:11.540904   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:11.540966   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:11.585681   64850 cri.go:89] found id: ""
	I0205 03:20:11.585709   64850 logs.go:282] 0 containers: []
	W0205 03:20:11.585721   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:11.585732   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:11.585746   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:11.633173   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:11.633209   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:11.647160   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:11.647183   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:11.722399   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:11.722433   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:11.722444   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:11.803231   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:11.803271   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:14.343905   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:14.357385   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:14.357447   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:14.393130   64850 cri.go:89] found id: ""
	I0205 03:20:14.393155   64850 logs.go:282] 0 containers: []
	W0205 03:20:14.393162   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:14.393168   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:14.393241   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:14.428754   64850 cri.go:89] found id: ""
	I0205 03:20:14.428781   64850 logs.go:282] 0 containers: []
	W0205 03:20:14.428789   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:14.428794   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:14.428854   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:14.462305   64850 cri.go:89] found id: ""
	I0205 03:20:14.462327   64850 logs.go:282] 0 containers: []
	W0205 03:20:14.462334   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:14.462340   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:14.462387   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:14.496434   64850 cri.go:89] found id: ""
	I0205 03:20:14.496463   64850 logs.go:282] 0 containers: []
	W0205 03:20:14.496472   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:14.496478   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:14.496537   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:14.531886   64850 cri.go:89] found id: ""
	I0205 03:20:14.531912   64850 logs.go:282] 0 containers: []
	W0205 03:20:14.531921   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:14.531927   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:14.531976   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:14.566492   64850 cri.go:89] found id: ""
	I0205 03:20:14.566519   64850 logs.go:282] 0 containers: []
	W0205 03:20:14.566526   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:14.566532   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:14.566580   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:14.600870   64850 cri.go:89] found id: ""
	I0205 03:20:14.600899   64850 logs.go:282] 0 containers: []
	W0205 03:20:14.600907   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:14.600918   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:14.600971   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:14.638889   64850 cri.go:89] found id: ""
	I0205 03:20:14.638920   64850 logs.go:282] 0 containers: []
	W0205 03:20:14.638928   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:14.638935   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:14.638945   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:14.676440   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:14.676466   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:14.726194   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:14.726230   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:14.740240   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:14.740275   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:14.807643   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:14.807663   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:14.807674   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:17.382902   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:17.396261   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:17.396326   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:17.431698   64850 cri.go:89] found id: ""
	I0205 03:20:17.431725   64850 logs.go:282] 0 containers: []
	W0205 03:20:17.431733   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:17.431739   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:17.431788   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:17.466837   64850 cri.go:89] found id: ""
	I0205 03:20:17.466863   64850 logs.go:282] 0 containers: []
	W0205 03:20:17.466873   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:17.466880   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:17.466944   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:17.501211   64850 cri.go:89] found id: ""
	I0205 03:20:17.501247   64850 logs.go:282] 0 containers: []
	W0205 03:20:17.501256   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:17.501262   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:17.501326   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:17.534872   64850 cri.go:89] found id: ""
	I0205 03:20:17.534907   64850 logs.go:282] 0 containers: []
	W0205 03:20:17.534918   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:17.534927   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:17.534993   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:17.571053   64850 cri.go:89] found id: ""
	I0205 03:20:17.571084   64850 logs.go:282] 0 containers: []
	W0205 03:20:17.571096   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:17.571104   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:17.571167   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:17.606022   64850 cri.go:89] found id: ""
	I0205 03:20:17.606051   64850 logs.go:282] 0 containers: []
	W0205 03:20:17.606059   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:17.606065   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:17.606116   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:17.639895   64850 cri.go:89] found id: ""
	I0205 03:20:17.639924   64850 logs.go:282] 0 containers: []
	W0205 03:20:17.639932   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:17.639937   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:17.639985   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:17.673712   64850 cri.go:89] found id: ""
	I0205 03:20:17.673739   64850 logs.go:282] 0 containers: []
	W0205 03:20:17.673747   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:17.673755   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:17.673766   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:17.687373   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:17.687421   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:17.759848   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:17.759875   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:17.759891   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:17.834092   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:17.834125   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:17.870094   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:17.870121   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:20.420456   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:20.435130   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:20.435212   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:20.473107   64850 cri.go:89] found id: ""
	I0205 03:20:20.473147   64850 logs.go:282] 0 containers: []
	W0205 03:20:20.473159   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:20.473166   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:20.473233   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:20.508885   64850 cri.go:89] found id: ""
	I0205 03:20:20.508911   64850 logs.go:282] 0 containers: []
	W0205 03:20:20.508918   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:20.508925   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:20.508975   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:20.542875   64850 cri.go:89] found id: ""
	I0205 03:20:20.542907   64850 logs.go:282] 0 containers: []
	W0205 03:20:20.542914   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:20.542919   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:20.542967   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:20.577992   64850 cri.go:89] found id: ""
	I0205 03:20:20.578023   64850 logs.go:282] 0 containers: []
	W0205 03:20:20.578034   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:20.578042   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:20.578102   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:20.612709   64850 cri.go:89] found id: ""
	I0205 03:20:20.612742   64850 logs.go:282] 0 containers: []
	W0205 03:20:20.612755   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:20.612763   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:20.612835   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:20.644752   64850 cri.go:89] found id: ""
	I0205 03:20:20.644786   64850 logs.go:282] 0 containers: []
	W0205 03:20:20.644796   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:20.644807   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:20.644876   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:20.679912   64850 cri.go:89] found id: ""
	I0205 03:20:20.679941   64850 logs.go:282] 0 containers: []
	W0205 03:20:20.679953   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:20.679961   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:20.680016   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:20.712069   64850 cri.go:89] found id: ""
	I0205 03:20:20.712096   64850 logs.go:282] 0 containers: []
	W0205 03:20:20.712106   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:20.712117   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:20.712164   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:20.724820   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:20.724847   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:20.792023   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:20.792048   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:20.792062   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:20.864511   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:20.864547   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:20.902003   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:20.902036   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:23.454248   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:23.467780   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:23.467856   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:23.504903   64850 cri.go:89] found id: ""
	I0205 03:20:23.504929   64850 logs.go:282] 0 containers: []
	W0205 03:20:23.504938   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:23.504946   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:23.505005   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:23.538357   64850 cri.go:89] found id: ""
	I0205 03:20:23.538385   64850 logs.go:282] 0 containers: []
	W0205 03:20:23.538395   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:23.538403   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:23.538463   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:23.573247   64850 cri.go:89] found id: ""
	I0205 03:20:23.573278   64850 logs.go:282] 0 containers: []
	W0205 03:20:23.573286   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:23.573291   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:23.573351   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:23.612059   64850 cri.go:89] found id: ""
	I0205 03:20:23.612087   64850 logs.go:282] 0 containers: []
	W0205 03:20:23.612095   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:23.612100   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:23.612147   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:23.647247   64850 cri.go:89] found id: ""
	I0205 03:20:23.647281   64850 logs.go:282] 0 containers: []
	W0205 03:20:23.647292   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:23.647300   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:23.647374   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:23.693431   64850 cri.go:89] found id: ""
	I0205 03:20:23.693473   64850 logs.go:282] 0 containers: []
	W0205 03:20:23.693482   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:23.693488   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:23.693548   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:23.741790   64850 cri.go:89] found id: ""
	I0205 03:20:23.741820   64850 logs.go:282] 0 containers: []
	W0205 03:20:23.741831   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:23.741838   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:23.741897   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:23.792180   64850 cri.go:89] found id: ""
	I0205 03:20:23.792207   64850 logs.go:282] 0 containers: []
	W0205 03:20:23.792216   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:23.792227   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:23.792239   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:23.842922   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:23.842946   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:23.892932   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:23.892969   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:23.909975   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:23.910012   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:23.990347   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:23.990398   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:23.990414   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:26.569196   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:26.585697   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:26.585766   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:26.620426   64850 cri.go:89] found id: ""
	I0205 03:20:26.620451   64850 logs.go:282] 0 containers: []
	W0205 03:20:26.620458   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:26.620464   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:26.620523   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:26.657111   64850 cri.go:89] found id: ""
	I0205 03:20:26.657144   64850 logs.go:282] 0 containers: []
	W0205 03:20:26.657151   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:26.657158   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:26.657226   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:26.696171   64850 cri.go:89] found id: ""
	I0205 03:20:26.696198   64850 logs.go:282] 0 containers: []
	W0205 03:20:26.696208   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:26.696215   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:26.696276   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:26.733359   64850 cri.go:89] found id: ""
	I0205 03:20:26.733387   64850 logs.go:282] 0 containers: []
	W0205 03:20:26.733397   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:26.733405   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:26.733470   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:26.772421   64850 cri.go:89] found id: ""
	I0205 03:20:26.772456   64850 logs.go:282] 0 containers: []
	W0205 03:20:26.772464   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:26.772471   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:26.772536   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:26.808441   64850 cri.go:89] found id: ""
	I0205 03:20:26.808483   64850 logs.go:282] 0 containers: []
	W0205 03:20:26.808494   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:26.808502   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:26.808562   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:26.845852   64850 cri.go:89] found id: ""
	I0205 03:20:26.845879   64850 logs.go:282] 0 containers: []
	W0205 03:20:26.845887   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:26.845892   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:26.845959   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:26.884071   64850 cri.go:89] found id: ""
	I0205 03:20:26.884096   64850 logs.go:282] 0 containers: []
	W0205 03:20:26.884106   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:26.884115   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:26.884125   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:26.939369   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:26.939407   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:26.953947   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:26.953977   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:27.032098   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:27.032120   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:27.032131   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:27.122128   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:27.122163   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:29.659214   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:29.672152   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:29.672246   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:29.704154   64850 cri.go:89] found id: ""
	I0205 03:20:29.704183   64850 logs.go:282] 0 containers: []
	W0205 03:20:29.704193   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:29.704201   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:29.704264   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:29.737425   64850 cri.go:89] found id: ""
	I0205 03:20:29.737462   64850 logs.go:282] 0 containers: []
	W0205 03:20:29.737473   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:29.737486   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:29.737549   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:29.772894   64850 cri.go:89] found id: ""
	I0205 03:20:29.772936   64850 logs.go:282] 0 containers: []
	W0205 03:20:29.772948   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:29.772957   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:29.773004   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:29.805416   64850 cri.go:89] found id: ""
	I0205 03:20:29.805451   64850 logs.go:282] 0 containers: []
	W0205 03:20:29.805461   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:29.805469   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:29.805538   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:29.838245   64850 cri.go:89] found id: ""
	I0205 03:20:29.838271   64850 logs.go:282] 0 containers: []
	W0205 03:20:29.838279   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:29.838290   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:29.838347   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:29.870523   64850 cri.go:89] found id: ""
	I0205 03:20:29.870547   64850 logs.go:282] 0 containers: []
	W0205 03:20:29.870554   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:29.870560   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:29.870618   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:29.907407   64850 cri.go:89] found id: ""
	I0205 03:20:29.907438   64850 logs.go:282] 0 containers: []
	W0205 03:20:29.907448   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:29.907455   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:29.907519   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:29.942233   64850 cri.go:89] found id: ""
	I0205 03:20:29.942256   64850 logs.go:282] 0 containers: []
	W0205 03:20:29.942272   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:29.942290   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:29.942301   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:29.996644   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:29.996685   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:30.009927   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:30.009957   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:30.088369   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:30.088395   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:30.088409   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:30.177019   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:30.177052   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:32.723599   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:32.736403   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:32.736482   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:32.770041   64850 cri.go:89] found id: ""
	I0205 03:20:32.770076   64850 logs.go:282] 0 containers: []
	W0205 03:20:32.770086   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:32.770095   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:32.770152   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:32.803813   64850 cri.go:89] found id: ""
	I0205 03:20:32.803849   64850 logs.go:282] 0 containers: []
	W0205 03:20:32.803859   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:32.803866   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:32.803930   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:32.838966   64850 cri.go:89] found id: ""
	I0205 03:20:32.838995   64850 logs.go:282] 0 containers: []
	W0205 03:20:32.839005   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:32.839013   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:32.839075   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:32.871709   64850 cri.go:89] found id: ""
	I0205 03:20:32.871731   64850 logs.go:282] 0 containers: []
	W0205 03:20:32.871739   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:32.871745   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:32.871795   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:32.905750   64850 cri.go:89] found id: ""
	I0205 03:20:32.905775   64850 logs.go:282] 0 containers: []
	W0205 03:20:32.905782   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:32.905788   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:32.905832   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:32.938430   64850 cri.go:89] found id: ""
	I0205 03:20:32.938462   64850 logs.go:282] 0 containers: []
	W0205 03:20:32.938472   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:32.938480   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:32.938540   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:32.973746   64850 cri.go:89] found id: ""
	I0205 03:20:32.973777   64850 logs.go:282] 0 containers: []
	W0205 03:20:32.973788   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:32.973795   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:32.973854   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:33.005761   64850 cri.go:89] found id: ""
	I0205 03:20:33.005786   64850 logs.go:282] 0 containers: []
	W0205 03:20:33.005796   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:33.005807   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:33.005821   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:33.071104   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:33.071151   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:33.071168   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:33.149735   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:33.149764   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:33.187583   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:33.187624   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:33.241853   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:33.241882   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:35.756453   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:35.770601   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:35.770677   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:35.803378   64850 cri.go:89] found id: ""
	I0205 03:20:35.803412   64850 logs.go:282] 0 containers: []
	W0205 03:20:35.803420   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:35.803426   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:35.803484   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:35.837491   64850 cri.go:89] found id: ""
	I0205 03:20:35.837519   64850 logs.go:282] 0 containers: []
	W0205 03:20:35.837530   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:35.837538   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:35.837598   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:35.872510   64850 cri.go:89] found id: ""
	I0205 03:20:35.872541   64850 logs.go:282] 0 containers: []
	W0205 03:20:35.872551   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:35.872559   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:35.872621   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:35.905548   64850 cri.go:89] found id: ""
	I0205 03:20:35.905580   64850 logs.go:282] 0 containers: []
	W0205 03:20:35.905597   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:35.905606   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:35.905658   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:35.937449   64850 cri.go:89] found id: ""
	I0205 03:20:35.937479   64850 logs.go:282] 0 containers: []
	W0205 03:20:35.937488   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:35.937495   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:35.937564   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:35.969406   64850 cri.go:89] found id: ""
	I0205 03:20:35.969442   64850 logs.go:282] 0 containers: []
	W0205 03:20:35.969454   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:35.969461   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:35.969520   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:36.004719   64850 cri.go:89] found id: ""
	I0205 03:20:36.004749   64850 logs.go:282] 0 containers: []
	W0205 03:20:36.004761   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:36.004769   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:36.004831   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:36.037449   64850 cri.go:89] found id: ""
	I0205 03:20:36.037478   64850 logs.go:282] 0 containers: []
	W0205 03:20:36.037487   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:36.037496   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:36.037508   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:36.073227   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:36.073257   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:36.126986   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:36.127022   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:36.140064   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:36.140092   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:36.209984   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:36.210009   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:36.210025   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:38.789438   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:38.803230   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:38.803300   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:38.839500   64850 cri.go:89] found id: ""
	I0205 03:20:38.839539   64850 logs.go:282] 0 containers: []
	W0205 03:20:38.839550   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:38.839559   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:38.839626   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:38.871452   64850 cri.go:89] found id: ""
	I0205 03:20:38.871478   64850 logs.go:282] 0 containers: []
	W0205 03:20:38.871485   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:38.871491   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:38.871540   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:38.903937   64850 cri.go:89] found id: ""
	I0205 03:20:38.903966   64850 logs.go:282] 0 containers: []
	W0205 03:20:38.903977   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:38.903984   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:38.904053   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:38.939205   64850 cri.go:89] found id: ""
	I0205 03:20:38.939234   64850 logs.go:282] 0 containers: []
	W0205 03:20:38.939241   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:38.939247   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:38.939293   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:38.971428   64850 cri.go:89] found id: ""
	I0205 03:20:38.971462   64850 logs.go:282] 0 containers: []
	W0205 03:20:38.971471   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:38.971484   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:38.971547   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:39.005564   64850 cri.go:89] found id: ""
	I0205 03:20:39.005594   64850 logs.go:282] 0 containers: []
	W0205 03:20:39.005604   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:39.005612   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:39.005661   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:39.038475   64850 cri.go:89] found id: ""
	I0205 03:20:39.038499   64850 logs.go:282] 0 containers: []
	W0205 03:20:39.038506   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:39.038512   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:39.038559   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:39.072619   64850 cri.go:89] found id: ""
	I0205 03:20:39.072650   64850 logs.go:282] 0 containers: []
	W0205 03:20:39.072663   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:39.072674   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:39.072688   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:39.120528   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:39.120563   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:39.133182   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:39.133207   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:39.205415   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:39.205443   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:39.205458   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:39.286674   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:39.286706   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:41.823771   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:41.836974   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:41.837056   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:41.871023   64850 cri.go:89] found id: ""
	I0205 03:20:41.871062   64850 logs.go:282] 0 containers: []
	W0205 03:20:41.871074   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:41.871083   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:41.871154   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:41.906703   64850 cri.go:89] found id: ""
	I0205 03:20:41.906737   64850 logs.go:282] 0 containers: []
	W0205 03:20:41.906748   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:41.906756   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:41.906813   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:41.938986   64850 cri.go:89] found id: ""
	I0205 03:20:41.939010   64850 logs.go:282] 0 containers: []
	W0205 03:20:41.939018   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:41.939023   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:41.939086   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:41.972448   64850 cri.go:89] found id: ""
	I0205 03:20:41.972478   64850 logs.go:282] 0 containers: []
	W0205 03:20:41.972487   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:41.972493   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:41.972541   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:42.007002   64850 cri.go:89] found id: ""
	I0205 03:20:42.007030   64850 logs.go:282] 0 containers: []
	W0205 03:20:42.007037   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:42.007043   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:42.007090   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:42.039497   64850 cri.go:89] found id: ""
	I0205 03:20:42.039526   64850 logs.go:282] 0 containers: []
	W0205 03:20:42.039535   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:42.039542   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:42.039607   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:42.071942   64850 cri.go:89] found id: ""
	I0205 03:20:42.071970   64850 logs.go:282] 0 containers: []
	W0205 03:20:42.071981   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:42.071988   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:42.072052   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:42.103609   64850 cri.go:89] found id: ""
	I0205 03:20:42.103644   64850 logs.go:282] 0 containers: []
	W0205 03:20:42.103655   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:42.103667   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:42.103681   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:42.117081   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:42.117109   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:42.185757   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:42.185783   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:42.185797   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:42.261286   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:42.261323   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:42.308856   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:42.308884   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:44.864719   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:44.880540   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:44.880592   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:44.913494   64850 cri.go:89] found id: ""
	I0205 03:20:44.913521   64850 logs.go:282] 0 containers: []
	W0205 03:20:44.913528   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:44.913535   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:44.913583   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:44.950189   64850 cri.go:89] found id: ""
	I0205 03:20:44.950222   64850 logs.go:282] 0 containers: []
	W0205 03:20:44.950232   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:44.950239   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:44.950287   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:44.988215   64850 cri.go:89] found id: ""
	I0205 03:20:44.988245   64850 logs.go:282] 0 containers: []
	W0205 03:20:44.988255   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:44.988263   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:44.988328   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:45.021530   64850 cri.go:89] found id: ""
	I0205 03:20:45.021562   64850 logs.go:282] 0 containers: []
	W0205 03:20:45.021572   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:45.021579   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:45.021627   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:45.056769   64850 cri.go:89] found id: ""
	I0205 03:20:45.056794   64850 logs.go:282] 0 containers: []
	W0205 03:20:45.056802   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:45.056807   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:45.056863   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:45.089904   64850 cri.go:89] found id: ""
	I0205 03:20:45.089934   64850 logs.go:282] 0 containers: []
	W0205 03:20:45.089948   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:45.089956   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:45.090025   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:45.124437   64850 cri.go:89] found id: ""
	I0205 03:20:45.124471   64850 logs.go:282] 0 containers: []
	W0205 03:20:45.124482   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:45.124490   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:45.124553   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:45.160096   64850 cri.go:89] found id: ""
	I0205 03:20:45.160123   64850 logs.go:282] 0 containers: []
	W0205 03:20:45.160130   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:45.160139   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:45.160152   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:45.215232   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:45.215269   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:45.228597   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:45.228625   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:45.301903   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:45.301939   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:45.301957   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:45.386475   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:45.386515   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:47.933468   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:47.946360   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:47.946443   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:47.980664   64850 cri.go:89] found id: ""
	I0205 03:20:47.980690   64850 logs.go:282] 0 containers: []
	W0205 03:20:47.980698   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:47.980705   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:47.980767   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:48.016049   64850 cri.go:89] found id: ""
	I0205 03:20:48.016079   64850 logs.go:282] 0 containers: []
	W0205 03:20:48.016090   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:48.016112   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:48.016193   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:48.052540   64850 cri.go:89] found id: ""
	I0205 03:20:48.052563   64850 logs.go:282] 0 containers: []
	W0205 03:20:48.052571   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:48.052576   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:48.052632   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:48.086326   64850 cri.go:89] found id: ""
	I0205 03:20:48.086368   64850 logs.go:282] 0 containers: []
	W0205 03:20:48.086378   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:48.086385   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:48.086447   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:48.126605   64850 cri.go:89] found id: ""
	I0205 03:20:48.126641   64850 logs.go:282] 0 containers: []
	W0205 03:20:48.126651   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:48.126656   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:48.126705   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:48.160613   64850 cri.go:89] found id: ""
	I0205 03:20:48.160643   64850 logs.go:282] 0 containers: []
	W0205 03:20:48.160653   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:48.160660   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:48.160719   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:48.197506   64850 cri.go:89] found id: ""
	I0205 03:20:48.197536   64850 logs.go:282] 0 containers: []
	W0205 03:20:48.197547   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:48.197554   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:48.197614   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:48.231487   64850 cri.go:89] found id: ""
	I0205 03:20:48.231510   64850 logs.go:282] 0 containers: []
	W0205 03:20:48.231519   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:48.231529   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:48.231564   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:48.283486   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:48.283532   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:48.297235   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:48.297269   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:48.377130   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:48.377156   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:48.377170   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:48.459045   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:48.459081   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:50.998765   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:51.011730   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:51.011805   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:51.048817   64850 cri.go:89] found id: ""
	I0205 03:20:51.048845   64850 logs.go:282] 0 containers: []
	W0205 03:20:51.048853   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:51.048858   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:51.048905   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:51.084604   64850 cri.go:89] found id: ""
	I0205 03:20:51.084632   64850 logs.go:282] 0 containers: []
	W0205 03:20:51.084641   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:51.084661   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:51.084733   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:51.122665   64850 cri.go:89] found id: ""
	I0205 03:20:51.122691   64850 logs.go:282] 0 containers: []
	W0205 03:20:51.122700   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:51.122707   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:51.122769   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:51.157411   64850 cri.go:89] found id: ""
	I0205 03:20:51.157441   64850 logs.go:282] 0 containers: []
	W0205 03:20:51.157457   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:51.157465   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:51.157527   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:51.190569   64850 cri.go:89] found id: ""
	I0205 03:20:51.190598   64850 logs.go:282] 0 containers: []
	W0205 03:20:51.190609   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:51.190617   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:51.190676   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:51.223570   64850 cri.go:89] found id: ""
	I0205 03:20:51.223597   64850 logs.go:282] 0 containers: []
	W0205 03:20:51.223604   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:51.223611   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:51.223666   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:51.258247   64850 cri.go:89] found id: ""
	I0205 03:20:51.258281   64850 logs.go:282] 0 containers: []
	W0205 03:20:51.258292   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:51.258299   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:51.258361   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:51.297730   64850 cri.go:89] found id: ""
	I0205 03:20:51.297768   64850 logs.go:282] 0 containers: []
	W0205 03:20:51.297784   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:51.297799   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:51.297819   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:51.350141   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:51.350177   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:51.363307   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:51.363332   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:51.432013   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:51.432046   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:51.432060   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:51.507166   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:51.507206   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:54.045480   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:54.058782   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:54.058865   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:54.091559   64850 cri.go:89] found id: ""
	I0205 03:20:54.091592   64850 logs.go:282] 0 containers: []
	W0205 03:20:54.091603   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:54.091610   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:54.091674   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:54.125773   64850 cri.go:89] found id: ""
	I0205 03:20:54.125805   64850 logs.go:282] 0 containers: []
	W0205 03:20:54.125815   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:54.125822   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:54.125887   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:54.160094   64850 cri.go:89] found id: ""
	I0205 03:20:54.160118   64850 logs.go:282] 0 containers: []
	W0205 03:20:54.160126   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:54.160131   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:54.160186   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:54.193566   64850 cri.go:89] found id: ""
	I0205 03:20:54.193597   64850 logs.go:282] 0 containers: []
	W0205 03:20:54.193607   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:54.193615   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:54.193676   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:54.227535   64850 cri.go:89] found id: ""
	I0205 03:20:54.227559   64850 logs.go:282] 0 containers: []
	W0205 03:20:54.227570   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:54.227577   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:54.227639   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:54.260926   64850 cri.go:89] found id: ""
	I0205 03:20:54.260952   64850 logs.go:282] 0 containers: []
	W0205 03:20:54.260962   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:54.260969   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:54.261032   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:54.294241   64850 cri.go:89] found id: ""
	I0205 03:20:54.294271   64850 logs.go:282] 0 containers: []
	W0205 03:20:54.294282   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:54.294290   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:54.294349   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:54.331525   64850 cri.go:89] found id: ""
	I0205 03:20:54.331552   64850 logs.go:282] 0 containers: []
	W0205 03:20:54.331560   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:54.331568   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:54.331580   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:20:54.383899   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:54.383945   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:54.397596   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:54.397628   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:54.464214   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:54.464240   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:54.464255   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:54.536470   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:54.536506   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:57.075478   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:20:57.088327   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:20:57.088396   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:20:57.123682   64850 cri.go:89] found id: ""
	I0205 03:20:57.123765   64850 logs.go:282] 0 containers: []
	W0205 03:20:57.123782   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:20:57.123799   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:20:57.123859   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:20:57.156322   64850 cri.go:89] found id: ""
	I0205 03:20:57.156355   64850 logs.go:282] 0 containers: []
	W0205 03:20:57.156363   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:20:57.156368   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:20:57.156419   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:20:57.190112   64850 cri.go:89] found id: ""
	I0205 03:20:57.190150   64850 logs.go:282] 0 containers: []
	W0205 03:20:57.190161   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:20:57.190170   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:20:57.190237   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:20:57.225121   64850 cri.go:89] found id: ""
	I0205 03:20:57.225151   64850 logs.go:282] 0 containers: []
	W0205 03:20:57.225162   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:20:57.225170   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:20:57.225235   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:20:57.258123   64850 cri.go:89] found id: ""
	I0205 03:20:57.258151   64850 logs.go:282] 0 containers: []
	W0205 03:20:57.258161   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:20:57.258168   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:20:57.258227   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:20:57.293465   64850 cri.go:89] found id: ""
	I0205 03:20:57.293495   64850 logs.go:282] 0 containers: []
	W0205 03:20:57.293505   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:20:57.293511   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:20:57.293557   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:20:57.327634   64850 cri.go:89] found id: ""
	I0205 03:20:57.327661   64850 logs.go:282] 0 containers: []
	W0205 03:20:57.327670   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:20:57.327678   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:20:57.327745   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:20:57.361450   64850 cri.go:89] found id: ""
	I0205 03:20:57.361482   64850 logs.go:282] 0 containers: []
	W0205 03:20:57.361494   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:20:57.361505   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:20:57.361520   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:20:57.375080   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:20:57.375112   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:20:57.445957   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:20:57.445987   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:20:57.446002   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:20:57.519900   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:20:57.519940   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:20:57.573997   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:20:57.574036   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:00.128876   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:00.141836   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:00.141910   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:00.180071   64850 cri.go:89] found id: ""
	I0205 03:21:00.180104   64850 logs.go:282] 0 containers: []
	W0205 03:21:00.180114   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:00.180120   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:00.180178   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:00.218792   64850 cri.go:89] found id: ""
	I0205 03:21:00.218826   64850 logs.go:282] 0 containers: []
	W0205 03:21:00.218837   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:00.218845   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:00.218927   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:00.257238   64850 cri.go:89] found id: ""
	I0205 03:21:00.257263   64850 logs.go:282] 0 containers: []
	W0205 03:21:00.257270   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:00.257276   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:00.257325   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:00.298928   64850 cri.go:89] found id: ""
	I0205 03:21:00.298949   64850 logs.go:282] 0 containers: []
	W0205 03:21:00.298958   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:00.298965   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:00.299015   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:00.342665   64850 cri.go:89] found id: ""
	I0205 03:21:00.342689   64850 logs.go:282] 0 containers: []
	W0205 03:21:00.342699   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:00.342706   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:00.342760   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:00.383420   64850 cri.go:89] found id: ""
	I0205 03:21:00.383447   64850 logs.go:282] 0 containers: []
	W0205 03:21:00.383459   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:00.383470   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:00.383522   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:00.421587   64850 cri.go:89] found id: ""
	I0205 03:21:00.421614   64850 logs.go:282] 0 containers: []
	W0205 03:21:00.421625   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:00.421637   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:00.421700   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:00.455599   64850 cri.go:89] found id: ""
	I0205 03:21:00.455633   64850 logs.go:282] 0 containers: []
	W0205 03:21:00.455644   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:00.455657   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:00.455670   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:00.505100   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:00.505135   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:00.521248   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:00.521292   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:00.592335   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:00.592366   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:00.592381   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:00.667933   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:00.667969   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:03.214223   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:03.228045   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:03.228105   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:03.265163   64850 cri.go:89] found id: ""
	I0205 03:21:03.265202   64850 logs.go:282] 0 containers: []
	W0205 03:21:03.265213   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:03.265220   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:03.265280   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:03.314718   64850 cri.go:89] found id: ""
	I0205 03:21:03.314745   64850 logs.go:282] 0 containers: []
	W0205 03:21:03.314757   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:03.314762   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:03.314825   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:03.349231   64850 cri.go:89] found id: ""
	I0205 03:21:03.349257   64850 logs.go:282] 0 containers: []
	W0205 03:21:03.349267   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:03.349274   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:03.349350   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:03.382510   64850 cri.go:89] found id: ""
	I0205 03:21:03.382539   64850 logs.go:282] 0 containers: []
	W0205 03:21:03.382550   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:03.382557   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:03.382618   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:03.417002   64850 cri.go:89] found id: ""
	I0205 03:21:03.417028   64850 logs.go:282] 0 containers: []
	W0205 03:21:03.417038   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:03.417044   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:03.417107   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:03.449506   64850 cri.go:89] found id: ""
	I0205 03:21:03.449533   64850 logs.go:282] 0 containers: []
	W0205 03:21:03.449543   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:03.449551   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:03.449605   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:03.483277   64850 cri.go:89] found id: ""
	I0205 03:21:03.483307   64850 logs.go:282] 0 containers: []
	W0205 03:21:03.483315   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:03.483320   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:03.483373   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:03.515871   64850 cri.go:89] found id: ""
	I0205 03:21:03.515902   64850 logs.go:282] 0 containers: []
	W0205 03:21:03.515912   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:03.515922   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:03.515932   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:03.565380   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:03.565414   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:03.582691   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:03.582732   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:03.659845   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:03.659870   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:03.659881   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:03.739180   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:03.739231   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:06.278915   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:06.292624   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:06.292714   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:06.326096   64850 cri.go:89] found id: ""
	I0205 03:21:06.326130   64850 logs.go:282] 0 containers: []
	W0205 03:21:06.326141   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:06.326149   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:06.326209   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:06.358089   64850 cri.go:89] found id: ""
	I0205 03:21:06.358114   64850 logs.go:282] 0 containers: []
	W0205 03:21:06.358122   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:06.358135   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:06.358184   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:06.391029   64850 cri.go:89] found id: ""
	I0205 03:21:06.391054   64850 logs.go:282] 0 containers: []
	W0205 03:21:06.391062   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:06.391070   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:06.391132   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:06.422571   64850 cri.go:89] found id: ""
	I0205 03:21:06.422600   64850 logs.go:282] 0 containers: []
	W0205 03:21:06.422611   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:06.422619   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:06.422672   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:06.464058   64850 cri.go:89] found id: ""
	I0205 03:21:06.464082   64850 logs.go:282] 0 containers: []
	W0205 03:21:06.464089   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:06.464095   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:06.464144   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:06.495410   64850 cri.go:89] found id: ""
	I0205 03:21:06.495443   64850 logs.go:282] 0 containers: []
	W0205 03:21:06.495453   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:06.495461   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:06.495513   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:06.528316   64850 cri.go:89] found id: ""
	I0205 03:21:06.528338   64850 logs.go:282] 0 containers: []
	W0205 03:21:06.528345   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:06.528351   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:06.528403   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:06.558387   64850 cri.go:89] found id: ""
	I0205 03:21:06.558418   64850 logs.go:282] 0 containers: []
	W0205 03:21:06.558426   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:06.558435   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:06.558449   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:06.631653   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:06.631699   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:06.671956   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:06.671982   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:06.722790   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:06.722827   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:06.736091   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:06.736124   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:06.799706   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:09.301374   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:09.319721   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:09.319795   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:09.357108   64850 cri.go:89] found id: ""
	I0205 03:21:09.357141   64850 logs.go:282] 0 containers: []
	W0205 03:21:09.357152   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:09.357158   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:09.357241   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:09.393116   64850 cri.go:89] found id: ""
	I0205 03:21:09.393153   64850 logs.go:282] 0 containers: []
	W0205 03:21:09.393163   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:09.393171   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:09.393230   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:09.433530   64850 cri.go:89] found id: ""
	I0205 03:21:09.433560   64850 logs.go:282] 0 containers: []
	W0205 03:21:09.433569   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:09.433575   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:09.433624   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:09.469017   64850 cri.go:89] found id: ""
	I0205 03:21:09.469059   64850 logs.go:282] 0 containers: []
	W0205 03:21:09.469071   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:09.469080   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:09.469163   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:09.503894   64850 cri.go:89] found id: ""
	I0205 03:21:09.503926   64850 logs.go:282] 0 containers: []
	W0205 03:21:09.503934   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:09.503939   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:09.504006   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:09.543611   64850 cri.go:89] found id: ""
	I0205 03:21:09.543652   64850 logs.go:282] 0 containers: []
	W0205 03:21:09.543664   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:09.543672   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:09.543730   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:09.578577   64850 cri.go:89] found id: ""
	I0205 03:21:09.578604   64850 logs.go:282] 0 containers: []
	W0205 03:21:09.578611   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:09.578617   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:09.578664   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:09.619251   64850 cri.go:89] found id: ""
	I0205 03:21:09.619279   64850 logs.go:282] 0 containers: []
	W0205 03:21:09.619288   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:09.619299   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:09.619313   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:09.662555   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:09.662581   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:09.719698   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:09.719742   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:09.739727   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:09.739765   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:09.834235   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:09.834260   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:09.834286   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:12.406003   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:12.421492   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:12.421592   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:12.460923   64850 cri.go:89] found id: ""
	I0205 03:21:12.460950   64850 logs.go:282] 0 containers: []
	W0205 03:21:12.460959   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:12.460965   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:12.461002   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:12.502176   64850 cri.go:89] found id: ""
	I0205 03:21:12.502196   64850 logs.go:282] 0 containers: []
	W0205 03:21:12.502203   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:12.502208   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:12.502244   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:12.542206   64850 cri.go:89] found id: ""
	I0205 03:21:12.542233   64850 logs.go:282] 0 containers: []
	W0205 03:21:12.542240   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:12.542246   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:12.542292   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:12.580380   64850 cri.go:89] found id: ""
	I0205 03:21:12.580414   64850 logs.go:282] 0 containers: []
	W0205 03:21:12.580422   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:12.580428   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:12.580474   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:12.618053   64850 cri.go:89] found id: ""
	I0205 03:21:12.618075   64850 logs.go:282] 0 containers: []
	W0205 03:21:12.618086   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:12.618093   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:12.618139   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:12.653230   64850 cri.go:89] found id: ""
	I0205 03:21:12.653258   64850 logs.go:282] 0 containers: []
	W0205 03:21:12.653265   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:12.653271   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:12.653323   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:12.687566   64850 cri.go:89] found id: ""
	I0205 03:21:12.687593   64850 logs.go:282] 0 containers: []
	W0205 03:21:12.687604   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:12.687612   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:12.687673   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:12.723048   64850 cri.go:89] found id: ""
	I0205 03:21:12.723072   64850 logs.go:282] 0 containers: []
	W0205 03:21:12.723082   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:12.723093   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:12.723106   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:12.775392   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:12.775421   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:12.788111   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:12.788137   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:12.857490   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:12.857509   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:12.857521   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:12.931407   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:12.931447   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:15.467272   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:15.483255   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:15.483330   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:15.527631   64850 cri.go:89] found id: ""
	I0205 03:21:15.527659   64850 logs.go:282] 0 containers: []
	W0205 03:21:15.527668   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:15.527674   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:15.527724   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:15.593127   64850 cri.go:89] found id: ""
	I0205 03:21:15.593171   64850 logs.go:282] 0 containers: []
	W0205 03:21:15.593179   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:15.593185   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:15.593238   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:15.625778   64850 cri.go:89] found id: ""
	I0205 03:21:15.625809   64850 logs.go:282] 0 containers: []
	W0205 03:21:15.625820   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:15.625827   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:15.625888   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:15.662488   64850 cri.go:89] found id: ""
	I0205 03:21:15.662514   64850 logs.go:282] 0 containers: []
	W0205 03:21:15.662522   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:15.662530   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:15.662585   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:15.695544   64850 cri.go:89] found id: ""
	I0205 03:21:15.695568   64850 logs.go:282] 0 containers: []
	W0205 03:21:15.695582   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:15.695590   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:15.695654   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:15.730236   64850 cri.go:89] found id: ""
	I0205 03:21:15.730258   64850 logs.go:282] 0 containers: []
	W0205 03:21:15.730265   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:15.730270   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:15.730313   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:15.765217   64850 cri.go:89] found id: ""
	I0205 03:21:15.765240   64850 logs.go:282] 0 containers: []
	W0205 03:21:15.765246   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:15.765252   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:15.765298   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:15.795842   64850 cri.go:89] found id: ""
	I0205 03:21:15.795872   64850 logs.go:282] 0 containers: []
	W0205 03:21:15.795883   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:15.795894   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:15.795909   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:15.844292   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:15.844326   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:15.857074   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:15.857098   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:15.921745   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:15.921767   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:15.921778   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:15.992460   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:15.992491   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:18.529499   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:18.542999   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:18.543076   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:18.578953   64850 cri.go:89] found id: ""
	I0205 03:21:18.578986   64850 logs.go:282] 0 containers: []
	W0205 03:21:18.578997   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:18.579005   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:18.579064   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:18.612528   64850 cri.go:89] found id: ""
	I0205 03:21:18.612557   64850 logs.go:282] 0 containers: []
	W0205 03:21:18.612565   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:18.612571   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:18.612618   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:18.645087   64850 cri.go:89] found id: ""
	I0205 03:21:18.645117   64850 logs.go:282] 0 containers: []
	W0205 03:21:18.645127   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:18.645134   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:18.645196   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:18.679849   64850 cri.go:89] found id: ""
	I0205 03:21:18.679873   64850 logs.go:282] 0 containers: []
	W0205 03:21:18.679880   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:18.679886   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:18.679939   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:18.711973   64850 cri.go:89] found id: ""
	I0205 03:21:18.712005   64850 logs.go:282] 0 containers: []
	W0205 03:21:18.712016   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:18.712027   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:18.712090   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:18.749160   64850 cri.go:89] found id: ""
	I0205 03:21:18.749193   64850 logs.go:282] 0 containers: []
	W0205 03:21:18.749201   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:18.749208   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:18.749278   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:18.784295   64850 cri.go:89] found id: ""
	I0205 03:21:18.784318   64850 logs.go:282] 0 containers: []
	W0205 03:21:18.784326   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:18.784331   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:18.784374   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:18.822469   64850 cri.go:89] found id: ""
	I0205 03:21:18.822505   64850 logs.go:282] 0 containers: []
	W0205 03:21:18.822517   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:18.822528   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:18.822541   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:18.858653   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:18.858687   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:18.909725   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:18.909762   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:18.922837   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:18.922864   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:18.997554   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:18.997581   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:18.997595   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:21.569468   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:21.587621   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:21.587703   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:21.626836   64850 cri.go:89] found id: ""
	I0205 03:21:21.626872   64850 logs.go:282] 0 containers: []
	W0205 03:21:21.626885   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:21.626893   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:21.626956   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:21.660098   64850 cri.go:89] found id: ""
	I0205 03:21:21.660137   64850 logs.go:282] 0 containers: []
	W0205 03:21:21.660146   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:21.660153   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:21.660213   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:21.695618   64850 cri.go:89] found id: ""
	I0205 03:21:21.695645   64850 logs.go:282] 0 containers: []
	W0205 03:21:21.695655   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:21.695662   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:21.695719   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:21.729774   64850 cri.go:89] found id: ""
	I0205 03:21:21.729805   64850 logs.go:282] 0 containers: []
	W0205 03:21:21.729817   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:21.729825   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:21.729881   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:21.776153   64850 cri.go:89] found id: ""
	I0205 03:21:21.776184   64850 logs.go:282] 0 containers: []
	W0205 03:21:21.776211   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:21.776221   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:21.776299   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:21.816105   64850 cri.go:89] found id: ""
	I0205 03:21:21.816141   64850 logs.go:282] 0 containers: []
	W0205 03:21:21.816153   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:21.816170   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:21.816245   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:21.853056   64850 cri.go:89] found id: ""
	I0205 03:21:21.853086   64850 logs.go:282] 0 containers: []
	W0205 03:21:21.853097   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:21.853109   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:21.853186   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:21.896247   64850 cri.go:89] found id: ""
	I0205 03:21:21.896279   64850 logs.go:282] 0 containers: []
	W0205 03:21:21.896288   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:21.896309   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:21.896330   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:21.946420   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:21.946456   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:21.960824   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:21.960853   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:22.040354   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:22.040386   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:22.040401   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:22.119484   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:22.119519   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:24.657895   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:24.674680   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:24.674758   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:24.708970   64850 cri.go:89] found id: ""
	I0205 03:21:24.708999   64850 logs.go:282] 0 containers: []
	W0205 03:21:24.709008   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:24.709017   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:24.709078   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:24.749542   64850 cri.go:89] found id: ""
	I0205 03:21:24.749576   64850 logs.go:282] 0 containers: []
	W0205 03:21:24.749585   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:24.749594   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:24.749659   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:24.788057   64850 cri.go:89] found id: ""
	I0205 03:21:24.788079   64850 logs.go:282] 0 containers: []
	W0205 03:21:24.788090   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:24.788099   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:24.788166   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:24.822895   64850 cri.go:89] found id: ""
	I0205 03:21:24.822919   64850 logs.go:282] 0 containers: []
	W0205 03:21:24.822931   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:24.822939   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:24.822993   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:24.859883   64850 cri.go:89] found id: ""
	I0205 03:21:24.859910   64850 logs.go:282] 0 containers: []
	W0205 03:21:24.859920   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:24.859927   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:24.860139   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:24.898132   64850 cri.go:89] found id: ""
	I0205 03:21:24.898158   64850 logs.go:282] 0 containers: []
	W0205 03:21:24.898169   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:24.898180   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:24.898233   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:24.938908   64850 cri.go:89] found id: ""
	I0205 03:21:24.938935   64850 logs.go:282] 0 containers: []
	W0205 03:21:24.938944   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:24.938951   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:24.939008   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:24.972914   64850 cri.go:89] found id: ""
	I0205 03:21:24.972945   64850 logs.go:282] 0 containers: []
	W0205 03:21:24.972956   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:24.972969   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:24.972983   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:25.031700   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:25.031727   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:25.044389   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:25.044453   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:25.126697   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:25.126722   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:25.126736   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:25.217170   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:25.217198   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:27.757505   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:27.771918   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:27.771994   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:27.805316   64850 cri.go:89] found id: ""
	I0205 03:21:27.805362   64850 logs.go:282] 0 containers: []
	W0205 03:21:27.805373   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:27.805381   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:27.805444   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:27.844171   64850 cri.go:89] found id: ""
	I0205 03:21:27.844204   64850 logs.go:282] 0 containers: []
	W0205 03:21:27.844216   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:27.844224   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:27.844292   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:27.878822   64850 cri.go:89] found id: ""
	I0205 03:21:27.878850   64850 logs.go:282] 0 containers: []
	W0205 03:21:27.878858   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:27.878864   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:27.878913   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:27.911775   64850 cri.go:89] found id: ""
	I0205 03:21:27.911808   64850 logs.go:282] 0 containers: []
	W0205 03:21:27.911816   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:27.911823   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:27.911883   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:27.945361   64850 cri.go:89] found id: ""
	I0205 03:21:27.945395   64850 logs.go:282] 0 containers: []
	W0205 03:21:27.945464   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:27.945484   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:27.945545   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:27.985764   64850 cri.go:89] found id: ""
	I0205 03:21:27.985792   64850 logs.go:282] 0 containers: []
	W0205 03:21:27.985802   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:27.985810   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:27.985867   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:28.028588   64850 cri.go:89] found id: ""
	I0205 03:21:28.028614   64850 logs.go:282] 0 containers: []
	W0205 03:21:28.028624   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:28.028631   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:28.028696   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:28.070021   64850 cri.go:89] found id: ""
	I0205 03:21:28.070046   64850 logs.go:282] 0 containers: []
	W0205 03:21:28.070056   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:28.070067   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:28.070082   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:28.087307   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:28.087332   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:28.174579   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:28.174601   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:28.174614   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:28.256867   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:28.256899   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:28.307670   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:28.307702   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:30.877535   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:30.895169   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:30.895228   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:30.936405   64850 cri.go:89] found id: ""
	I0205 03:21:30.936436   64850 logs.go:282] 0 containers: []
	W0205 03:21:30.936449   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:30.936457   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:30.936512   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:30.977153   64850 cri.go:89] found id: ""
	I0205 03:21:30.977182   64850 logs.go:282] 0 containers: []
	W0205 03:21:30.977193   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:30.977200   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:30.977259   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:31.012102   64850 cri.go:89] found id: ""
	I0205 03:21:31.012131   64850 logs.go:282] 0 containers: []
	W0205 03:21:31.012141   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:31.012149   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:31.012217   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:31.046599   64850 cri.go:89] found id: ""
	I0205 03:21:31.046625   64850 logs.go:282] 0 containers: []
	W0205 03:21:31.046633   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:31.046639   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:31.046688   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:31.091333   64850 cri.go:89] found id: ""
	I0205 03:21:31.091362   64850 logs.go:282] 0 containers: []
	W0205 03:21:31.091371   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:31.091377   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:31.091439   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:31.126121   64850 cri.go:89] found id: ""
	I0205 03:21:31.126149   64850 logs.go:282] 0 containers: []
	W0205 03:21:31.126156   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:31.126162   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:31.126218   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:31.162841   64850 cri.go:89] found id: ""
	I0205 03:21:31.162879   64850 logs.go:282] 0 containers: []
	W0205 03:21:31.162891   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:31.162898   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:31.162964   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:31.198662   64850 cri.go:89] found id: ""
	I0205 03:21:31.198690   64850 logs.go:282] 0 containers: []
	W0205 03:21:31.198697   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:31.198705   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:31.198719   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:31.252605   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:31.252642   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:31.267781   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:31.267810   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:31.367466   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:31.367490   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:31.367501   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:31.458735   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:31.458769   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:33.998634   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:34.016369   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:34.016434   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:34.062491   64850 cri.go:89] found id: ""
	I0205 03:21:34.062519   64850 logs.go:282] 0 containers: []
	W0205 03:21:34.062529   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:34.062537   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:34.062599   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:34.107753   64850 cri.go:89] found id: ""
	I0205 03:21:34.107783   64850 logs.go:282] 0 containers: []
	W0205 03:21:34.107794   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:34.107805   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:34.107866   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:34.155383   64850 cri.go:89] found id: ""
	I0205 03:21:34.155428   64850 logs.go:282] 0 containers: []
	W0205 03:21:34.155440   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:34.155447   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:34.155505   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:34.199342   64850 cri.go:89] found id: ""
	I0205 03:21:34.199377   64850 logs.go:282] 0 containers: []
	W0205 03:21:34.199388   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:34.199406   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:34.199470   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:34.236758   64850 cri.go:89] found id: ""
	I0205 03:21:34.236842   64850 logs.go:282] 0 containers: []
	W0205 03:21:34.236878   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:34.236885   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:34.236960   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:34.283017   64850 cri.go:89] found id: ""
	I0205 03:21:34.283048   64850 logs.go:282] 0 containers: []
	W0205 03:21:34.283058   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:34.283066   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:34.283126   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:34.329128   64850 cri.go:89] found id: ""
	I0205 03:21:34.329157   64850 logs.go:282] 0 containers: []
	W0205 03:21:34.329168   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:34.329175   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:34.329233   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:34.372663   64850 cri.go:89] found id: ""
	I0205 03:21:34.372692   64850 logs.go:282] 0 containers: []
	W0205 03:21:34.372710   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:34.372723   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:34.372737   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:34.461811   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:34.461840   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:34.461854   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:34.541110   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:34.541150   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:34.590230   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:34.590263   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:34.655623   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:34.655649   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:37.171138   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:37.185442   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:37.185502   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:37.220621   64850 cri.go:89] found id: ""
	I0205 03:21:37.220655   64850 logs.go:282] 0 containers: []
	W0205 03:21:37.220665   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:37.220672   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:37.220731   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:37.255166   64850 cri.go:89] found id: ""
	I0205 03:21:37.255193   64850 logs.go:282] 0 containers: []
	W0205 03:21:37.255200   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:37.255208   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:37.255258   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:37.288692   64850 cri.go:89] found id: ""
	I0205 03:21:37.288723   64850 logs.go:282] 0 containers: []
	W0205 03:21:37.288738   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:37.288746   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:37.288811   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:37.325305   64850 cri.go:89] found id: ""
	I0205 03:21:37.325358   64850 logs.go:282] 0 containers: []
	W0205 03:21:37.325372   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:37.325379   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:37.325447   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:37.360896   64850 cri.go:89] found id: ""
	I0205 03:21:37.360925   64850 logs.go:282] 0 containers: []
	W0205 03:21:37.360935   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:37.360942   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:37.361003   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:37.392390   64850 cri.go:89] found id: ""
	I0205 03:21:37.392421   64850 logs.go:282] 0 containers: []
	W0205 03:21:37.392431   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:37.392440   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:37.392509   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:37.432148   64850 cri.go:89] found id: ""
	I0205 03:21:37.432180   64850 logs.go:282] 0 containers: []
	W0205 03:21:37.432191   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:37.432201   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:37.432258   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:37.463792   64850 cri.go:89] found id: ""
	I0205 03:21:37.463822   64850 logs.go:282] 0 containers: []
	W0205 03:21:37.463833   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:37.463843   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:37.463855   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:37.514787   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:37.514821   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:37.527445   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:37.527472   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:37.606263   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:37.606285   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:37.606297   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:37.679688   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:37.679721   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:40.221897   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:40.236222   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:40.236313   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:40.276762   64850 cri.go:89] found id: ""
	I0205 03:21:40.276790   64850 logs.go:282] 0 containers: []
	W0205 03:21:40.276811   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:40.276819   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:40.276886   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:40.315426   64850 cri.go:89] found id: ""
	I0205 03:21:40.315458   64850 logs.go:282] 0 containers: []
	W0205 03:21:40.315468   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:40.315476   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:40.315537   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:40.351055   64850 cri.go:89] found id: ""
	I0205 03:21:40.351083   64850 logs.go:282] 0 containers: []
	W0205 03:21:40.351092   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:40.351100   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:40.351171   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:40.391168   64850 cri.go:89] found id: ""
	I0205 03:21:40.391202   64850 logs.go:282] 0 containers: []
	W0205 03:21:40.391214   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:40.391222   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:40.391285   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:40.433281   64850 cri.go:89] found id: ""
	I0205 03:21:40.433319   64850 logs.go:282] 0 containers: []
	W0205 03:21:40.433329   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:40.433360   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:40.433425   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:40.469243   64850 cri.go:89] found id: ""
	I0205 03:21:40.469280   64850 logs.go:282] 0 containers: []
	W0205 03:21:40.469293   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:40.469301   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:40.469389   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:40.501530   64850 cri.go:89] found id: ""
	I0205 03:21:40.501557   64850 logs.go:282] 0 containers: []
	W0205 03:21:40.501568   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:40.501577   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:40.501651   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:40.538087   64850 cri.go:89] found id: ""
	I0205 03:21:40.538119   64850 logs.go:282] 0 containers: []
	W0205 03:21:40.538130   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:40.538142   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:40.538155   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:40.551490   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:40.551529   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:40.631334   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:40.631362   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:40.631374   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:40.710681   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:40.710717   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:40.746669   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:40.746700   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:43.296442   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:43.311737   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:43.311817   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:43.349589   64850 cri.go:89] found id: ""
	I0205 03:21:43.349617   64850 logs.go:282] 0 containers: []
	W0205 03:21:43.349627   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:43.349634   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:43.349682   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:43.389307   64850 cri.go:89] found id: ""
	I0205 03:21:43.389335   64850 logs.go:282] 0 containers: []
	W0205 03:21:43.389359   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:43.389368   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:43.389424   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:43.426701   64850 cri.go:89] found id: ""
	I0205 03:21:43.426735   64850 logs.go:282] 0 containers: []
	W0205 03:21:43.426746   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:43.426755   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:43.426815   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:43.466775   64850 cri.go:89] found id: ""
	I0205 03:21:43.466802   64850 logs.go:282] 0 containers: []
	W0205 03:21:43.466812   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:43.466819   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:43.466873   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:43.512403   64850 cri.go:89] found id: ""
	I0205 03:21:43.512439   64850 logs.go:282] 0 containers: []
	W0205 03:21:43.512450   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:43.512458   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:43.512525   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:43.557034   64850 cri.go:89] found id: ""
	I0205 03:21:43.557067   64850 logs.go:282] 0 containers: []
	W0205 03:21:43.557078   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:43.557086   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:43.557157   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:43.596161   64850 cri.go:89] found id: ""
	I0205 03:21:43.596191   64850 logs.go:282] 0 containers: []
	W0205 03:21:43.596202   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:43.596210   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:43.596264   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:43.627265   64850 cri.go:89] found id: ""
	I0205 03:21:43.627298   64850 logs.go:282] 0 containers: []
	W0205 03:21:43.627309   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:43.627320   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:43.627333   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:43.690064   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:43.690094   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:43.690108   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:43.765764   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:43.765806   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:43.801385   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:43.801420   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:43.852143   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:43.852174   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:46.366915   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:46.383379   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:46.383455   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:46.422901   64850 cri.go:89] found id: ""
	I0205 03:21:46.422928   64850 logs.go:282] 0 containers: []
	W0205 03:21:46.422939   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:46.422947   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:46.423007   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:46.467076   64850 cri.go:89] found id: ""
	I0205 03:21:46.467107   64850 logs.go:282] 0 containers: []
	W0205 03:21:46.467118   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:46.467127   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:46.467200   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:46.509612   64850 cri.go:89] found id: ""
	I0205 03:21:46.509644   64850 logs.go:282] 0 containers: []
	W0205 03:21:46.509656   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:46.509669   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:46.509731   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:46.550912   64850 cri.go:89] found id: ""
	I0205 03:21:46.550943   64850 logs.go:282] 0 containers: []
	W0205 03:21:46.550964   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:46.550972   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:46.551047   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:46.585746   64850 cri.go:89] found id: ""
	I0205 03:21:46.585775   64850 logs.go:282] 0 containers: []
	W0205 03:21:46.585786   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:46.585795   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:46.585853   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:46.620196   64850 cri.go:89] found id: ""
	I0205 03:21:46.620226   64850 logs.go:282] 0 containers: []
	W0205 03:21:46.620237   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:46.620244   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:46.620293   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:46.658035   64850 cri.go:89] found id: ""
	I0205 03:21:46.658060   64850 logs.go:282] 0 containers: []
	W0205 03:21:46.658067   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:46.658072   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:46.658123   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:46.690731   64850 cri.go:89] found id: ""
	I0205 03:21:46.690765   64850 logs.go:282] 0 containers: []
	W0205 03:21:46.690775   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:46.690789   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:46.690802   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:46.753705   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:46.753736   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:46.753752   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:46.827465   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:46.827503   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:46.865975   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:46.866011   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:46.915282   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:46.915321   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:49.431579   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:49.444234   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:49.444290   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:49.476550   64850 cri.go:89] found id: ""
	I0205 03:21:49.476580   64850 logs.go:282] 0 containers: []
	W0205 03:21:49.476591   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:49.476599   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:49.476658   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:49.511524   64850 cri.go:89] found id: ""
	I0205 03:21:49.511565   64850 logs.go:282] 0 containers: []
	W0205 03:21:49.511577   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:49.511586   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:49.511653   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:49.543481   64850 cri.go:89] found id: ""
	I0205 03:21:49.543507   64850 logs.go:282] 0 containers: []
	W0205 03:21:49.543514   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:49.543520   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:49.543574   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:49.575179   64850 cri.go:89] found id: ""
	I0205 03:21:49.575207   64850 logs.go:282] 0 containers: []
	W0205 03:21:49.575216   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:49.575223   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:49.575289   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:49.606274   64850 cri.go:89] found id: ""
	I0205 03:21:49.606297   64850 logs.go:282] 0 containers: []
	W0205 03:21:49.606304   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:49.606309   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:49.606366   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:49.638315   64850 cri.go:89] found id: ""
	I0205 03:21:49.638356   64850 logs.go:282] 0 containers: []
	W0205 03:21:49.638367   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:49.638375   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:49.638445   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:49.683526   64850 cri.go:89] found id: ""
	I0205 03:21:49.683549   64850 logs.go:282] 0 containers: []
	W0205 03:21:49.683557   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:49.683563   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:49.683619   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:49.714408   64850 cri.go:89] found id: ""
	I0205 03:21:49.714432   64850 logs.go:282] 0 containers: []
	W0205 03:21:49.714440   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:49.714457   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:49.714468   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:49.792115   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:49.792139   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:49.792169   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:49.876064   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:49.876102   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:49.914343   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:49.914389   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:49.961217   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:49.961251   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:52.475684   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:52.491002   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:52.491066   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:52.528037   64850 cri.go:89] found id: ""
	I0205 03:21:52.528067   64850 logs.go:282] 0 containers: []
	W0205 03:21:52.528075   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:52.528080   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:52.528130   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:52.563438   64850 cri.go:89] found id: ""
	I0205 03:21:52.563468   64850 logs.go:282] 0 containers: []
	W0205 03:21:52.563480   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:52.563487   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:52.563541   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:52.600070   64850 cri.go:89] found id: ""
	I0205 03:21:52.600099   64850 logs.go:282] 0 containers: []
	W0205 03:21:52.600111   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:52.600117   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:52.600175   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:52.639033   64850 cri.go:89] found id: ""
	I0205 03:21:52.639064   64850 logs.go:282] 0 containers: []
	W0205 03:21:52.639075   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:52.639082   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:52.639141   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:52.677519   64850 cri.go:89] found id: ""
	I0205 03:21:52.677550   64850 logs.go:282] 0 containers: []
	W0205 03:21:52.677557   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:52.677563   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:52.677611   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:52.709060   64850 cri.go:89] found id: ""
	I0205 03:21:52.709088   64850 logs.go:282] 0 containers: []
	W0205 03:21:52.709099   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:52.709106   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:52.709185   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:52.740897   64850 cri.go:89] found id: ""
	I0205 03:21:52.740924   64850 logs.go:282] 0 containers: []
	W0205 03:21:52.740935   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:52.740942   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:52.740985   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:52.774160   64850 cri.go:89] found id: ""
	I0205 03:21:52.774193   64850 logs.go:282] 0 containers: []
	W0205 03:21:52.774204   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:52.774215   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:52.774228   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:52.860753   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:52.860785   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:52.902240   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:52.902265   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:52.952954   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:52.952995   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:52.968003   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:52.968031   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:53.046766   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:55.547372   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:55.566919   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:55.567003   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:55.610970   64850 cri.go:89] found id: ""
	I0205 03:21:55.611004   64850 logs.go:282] 0 containers: []
	W0205 03:21:55.611015   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:55.611023   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:55.611100   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:55.655910   64850 cri.go:89] found id: ""
	I0205 03:21:55.655938   64850 logs.go:282] 0 containers: []
	W0205 03:21:55.655948   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:55.655956   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:55.656014   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:55.704803   64850 cri.go:89] found id: ""
	I0205 03:21:55.704828   64850 logs.go:282] 0 containers: []
	W0205 03:21:55.704835   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:55.704842   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:55.704892   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:55.752519   64850 cri.go:89] found id: ""
	I0205 03:21:55.752549   64850 logs.go:282] 0 containers: []
	W0205 03:21:55.752559   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:55.752566   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:55.752638   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:55.827563   64850 cri.go:89] found id: ""
	I0205 03:21:55.827591   64850 logs.go:282] 0 containers: []
	W0205 03:21:55.827602   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:55.827610   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:55.827667   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:55.884553   64850 cri.go:89] found id: ""
	I0205 03:21:55.884588   64850 logs.go:282] 0 containers: []
	W0205 03:21:55.884613   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:55.884623   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:55.884685   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:55.936080   64850 cri.go:89] found id: ""
	I0205 03:21:55.936111   64850 logs.go:282] 0 containers: []
	W0205 03:21:55.936121   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:55.936129   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:55.936187   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:55.985188   64850 cri.go:89] found id: ""
	I0205 03:21:55.985213   64850 logs.go:282] 0 containers: []
	W0205 03:21:55.985224   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:55.985235   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:55.985251   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:56.036146   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:56.036188   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:21:56.114454   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:56.114556   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:56.132051   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:56.132074   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:56.223306   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:56.223338   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:56.223352   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:58.824937   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:21:58.843555   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:21:58.843633   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:21:58.895806   64850 cri.go:89] found id: ""
	I0205 03:21:58.895828   64850 logs.go:282] 0 containers: []
	W0205 03:21:58.895837   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:21:58.895844   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:21:58.895894   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:21:58.937307   64850 cri.go:89] found id: ""
	I0205 03:21:58.937333   64850 logs.go:282] 0 containers: []
	W0205 03:21:58.937365   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:21:58.937372   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:21:58.937431   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:21:58.972297   64850 cri.go:89] found id: ""
	I0205 03:21:58.972331   64850 logs.go:282] 0 containers: []
	W0205 03:21:58.972341   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:21:58.972348   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:21:58.972410   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:21:59.017330   64850 cri.go:89] found id: ""
	I0205 03:21:59.017380   64850 logs.go:282] 0 containers: []
	W0205 03:21:59.017391   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:21:59.017399   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:21:59.017452   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:21:59.058484   64850 cri.go:89] found id: ""
	I0205 03:21:59.058512   64850 logs.go:282] 0 containers: []
	W0205 03:21:59.058523   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:21:59.058531   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:21:59.058590   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:21:59.101870   64850 cri.go:89] found id: ""
	I0205 03:21:59.101914   64850 logs.go:282] 0 containers: []
	W0205 03:21:59.101926   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:21:59.101936   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:21:59.102001   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:21:59.142967   64850 cri.go:89] found id: ""
	I0205 03:21:59.142994   64850 logs.go:282] 0 containers: []
	W0205 03:21:59.143004   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:21:59.143012   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:21:59.143071   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:21:59.176965   64850 cri.go:89] found id: ""
	I0205 03:21:59.176996   64850 logs.go:282] 0 containers: []
	W0205 03:21:59.177007   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:21:59.177018   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:21:59.177030   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:21:59.194031   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:21:59.194057   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:21:59.270785   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:21:59.270811   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:21:59.270828   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:21:59.354048   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:21:59.354096   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:21:59.401060   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:21:59.401091   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:01.967427   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:01.980782   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:22:01.980860   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:22:02.014247   64850 cri.go:89] found id: ""
	I0205 03:22:02.014280   64850 logs.go:282] 0 containers: []
	W0205 03:22:02.014293   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:22:02.014301   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:22:02.014363   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:22:02.050676   64850 cri.go:89] found id: ""
	I0205 03:22:02.050700   64850 logs.go:282] 0 containers: []
	W0205 03:22:02.050707   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:22:02.050713   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:22:02.050759   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:22:02.084707   64850 cri.go:89] found id: ""
	I0205 03:22:02.084735   64850 logs.go:282] 0 containers: []
	W0205 03:22:02.084747   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:22:02.084755   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:22:02.084814   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:22:02.120559   64850 cri.go:89] found id: ""
	I0205 03:22:02.120591   64850 logs.go:282] 0 containers: []
	W0205 03:22:02.120602   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:22:02.120609   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:22:02.120670   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:22:02.155040   64850 cri.go:89] found id: ""
	I0205 03:22:02.155070   64850 logs.go:282] 0 containers: []
	W0205 03:22:02.155081   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:22:02.155089   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:22:02.155150   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:22:02.193096   64850 cri.go:89] found id: ""
	I0205 03:22:02.193121   64850 logs.go:282] 0 containers: []
	W0205 03:22:02.193130   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:22:02.193139   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:22:02.193198   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:22:02.232749   64850 cri.go:89] found id: ""
	I0205 03:22:02.232780   64850 logs.go:282] 0 containers: []
	W0205 03:22:02.232790   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:22:02.232808   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:22:02.232860   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:22:02.277247   64850 cri.go:89] found id: ""
	I0205 03:22:02.277272   64850 logs.go:282] 0 containers: []
	W0205 03:22:02.277283   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:22:02.277294   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:22:02.277310   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:02.345576   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:22:02.345672   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:22:02.366400   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:22:02.366434   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:22:02.452833   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:22:02.452859   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:22:02.452876   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:22:02.547012   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:22:02.547050   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:22:05.085464   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:05.098993   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:22:05.099069   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:22:05.144570   64850 cri.go:89] found id: ""
	I0205 03:22:05.144601   64850 logs.go:282] 0 containers: []
	W0205 03:22:05.144613   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:22:05.144620   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:22:05.144682   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:22:05.180833   64850 cri.go:89] found id: ""
	I0205 03:22:05.180867   64850 logs.go:282] 0 containers: []
	W0205 03:22:05.180880   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:22:05.180887   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:22:05.180951   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:22:05.226904   64850 cri.go:89] found id: ""
	I0205 03:22:05.226938   64850 logs.go:282] 0 containers: []
	W0205 03:22:05.226949   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:22:05.226957   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:22:05.227037   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:22:05.266470   64850 cri.go:89] found id: ""
	I0205 03:22:05.266561   64850 logs.go:282] 0 containers: []
	W0205 03:22:05.266577   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:22:05.266586   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:22:05.266655   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:22:05.302926   64850 cri.go:89] found id: ""
	I0205 03:22:05.302960   64850 logs.go:282] 0 containers: []
	W0205 03:22:05.302972   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:22:05.302981   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:22:05.303043   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:22:05.346622   64850 cri.go:89] found id: ""
	I0205 03:22:05.346656   64850 logs.go:282] 0 containers: []
	W0205 03:22:05.346669   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:22:05.346679   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:22:05.346767   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:22:05.386228   64850 cri.go:89] found id: ""
	I0205 03:22:05.386265   64850 logs.go:282] 0 containers: []
	W0205 03:22:05.386277   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:22:05.386286   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:22:05.386355   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:22:05.430940   64850 cri.go:89] found id: ""
	I0205 03:22:05.430979   64850 logs.go:282] 0 containers: []
	W0205 03:22:05.430991   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:22:05.431003   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:22:05.431022   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:05.494480   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:22:05.494519   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:22:05.509952   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:22:05.509984   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:22:05.587429   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:22:05.587464   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:22:05.587479   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:22:05.702558   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:22:05.702602   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:22:08.253491   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:08.275897   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:22:08.275969   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:22:08.326739   64850 cri.go:89] found id: ""
	I0205 03:22:08.326780   64850 logs.go:282] 0 containers: []
	W0205 03:22:08.326791   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:22:08.326799   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:22:08.326865   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:22:08.370361   64850 cri.go:89] found id: ""
	I0205 03:22:08.370392   64850 logs.go:282] 0 containers: []
	W0205 03:22:08.370402   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:22:08.370410   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:22:08.370505   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:22:08.413684   64850 cri.go:89] found id: ""
	I0205 03:22:08.413713   64850 logs.go:282] 0 containers: []
	W0205 03:22:08.413724   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:22:08.413731   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:22:08.413783   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:22:08.456120   64850 cri.go:89] found id: ""
	I0205 03:22:08.456142   64850 logs.go:282] 0 containers: []
	W0205 03:22:08.456149   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:22:08.456155   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:22:08.456223   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:22:08.499377   64850 cri.go:89] found id: ""
	I0205 03:22:08.499404   64850 logs.go:282] 0 containers: []
	W0205 03:22:08.499414   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:22:08.499421   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:22:08.499483   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:22:08.538104   64850 cri.go:89] found id: ""
	I0205 03:22:08.538129   64850 logs.go:282] 0 containers: []
	W0205 03:22:08.538140   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:22:08.538148   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:22:08.538215   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:22:08.574488   64850 cri.go:89] found id: ""
	I0205 03:22:08.574520   64850 logs.go:282] 0 containers: []
	W0205 03:22:08.574532   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:22:08.574540   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:22:08.574614   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:22:08.606244   64850 cri.go:89] found id: ""
	I0205 03:22:08.606270   64850 logs.go:282] 0 containers: []
	W0205 03:22:08.606280   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:22:08.606291   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:22:08.606306   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:22:08.678220   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:22:08.678244   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:22:08.678259   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:22:08.755397   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:22:08.755431   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:22:08.797190   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:22:08.797221   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:08.862176   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:22:08.862209   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:22:11.377693   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:11.393927   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:22:11.393996   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:22:11.435852   64850 cri.go:89] found id: ""
	I0205 03:22:11.435878   64850 logs.go:282] 0 containers: []
	W0205 03:22:11.435890   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:22:11.435899   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:22:11.435958   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:22:11.478953   64850 cri.go:89] found id: ""
	I0205 03:22:11.478983   64850 logs.go:282] 0 containers: []
	W0205 03:22:11.478995   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:22:11.479003   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:22:11.479064   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:22:11.517082   64850 cri.go:89] found id: ""
	I0205 03:22:11.517114   64850 logs.go:282] 0 containers: []
	W0205 03:22:11.517125   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:22:11.517134   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:22:11.517207   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:22:11.556101   64850 cri.go:89] found id: ""
	I0205 03:22:11.556144   64850 logs.go:282] 0 containers: []
	W0205 03:22:11.556155   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:22:11.556162   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:22:11.556227   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:22:11.598743   64850 cri.go:89] found id: ""
	I0205 03:22:11.598773   64850 logs.go:282] 0 containers: []
	W0205 03:22:11.598784   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:22:11.598792   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:22:11.598854   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:22:11.634810   64850 cri.go:89] found id: ""
	I0205 03:22:11.634840   64850 logs.go:282] 0 containers: []
	W0205 03:22:11.634851   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:22:11.634859   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:22:11.634910   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:22:11.669252   64850 cri.go:89] found id: ""
	I0205 03:22:11.669281   64850 logs.go:282] 0 containers: []
	W0205 03:22:11.669292   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:22:11.669300   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:22:11.669372   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:22:11.700928   64850 cri.go:89] found id: ""
	I0205 03:22:11.700951   64850 logs.go:282] 0 containers: []
	W0205 03:22:11.700958   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:22:11.700967   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:22:11.700982   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:11.767488   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:22:11.767523   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:22:11.783858   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:22:11.783890   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:22:11.867939   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:22:11.867962   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:22:11.867974   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:22:11.951839   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:22:11.951879   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:22:14.505499   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:14.519780   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:22:14.519866   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:22:14.561510   64850 cri.go:89] found id: ""
	I0205 03:22:14.561550   64850 logs.go:282] 0 containers: []
	W0205 03:22:14.561565   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:22:14.561575   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:22:14.561680   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:22:14.600739   64850 cri.go:89] found id: ""
	I0205 03:22:14.600772   64850 logs.go:282] 0 containers: []
	W0205 03:22:14.600783   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:22:14.600790   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:22:14.600858   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:22:14.647457   64850 cri.go:89] found id: ""
	I0205 03:22:14.647483   64850 logs.go:282] 0 containers: []
	W0205 03:22:14.647493   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:22:14.647500   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:22:14.647563   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:22:14.695487   64850 cri.go:89] found id: ""
	I0205 03:22:14.695518   64850 logs.go:282] 0 containers: []
	W0205 03:22:14.695529   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:22:14.695537   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:22:14.695597   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:22:14.741975   64850 cri.go:89] found id: ""
	I0205 03:22:14.742004   64850 logs.go:282] 0 containers: []
	W0205 03:22:14.742022   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:22:14.742030   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:22:14.742098   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:22:14.786053   64850 cri.go:89] found id: ""
	I0205 03:22:14.786087   64850 logs.go:282] 0 containers: []
	W0205 03:22:14.786099   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:22:14.786106   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:22:14.786169   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:22:14.828070   64850 cri.go:89] found id: ""
	I0205 03:22:14.828115   64850 logs.go:282] 0 containers: []
	W0205 03:22:14.828125   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:22:14.828134   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:22:14.828207   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:22:14.866009   64850 cri.go:89] found id: ""
	I0205 03:22:14.866039   64850 logs.go:282] 0 containers: []
	W0205 03:22:14.866049   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:22:14.866061   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:22:14.866076   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:14.919391   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:22:14.919424   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:22:14.933862   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:22:14.933895   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:22:15.009074   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:22:15.009106   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:22:15.009123   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:22:15.083282   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:22:15.083317   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:22:17.623032   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:17.641520   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:22:17.641594   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:22:17.683056   64850 cri.go:89] found id: ""
	I0205 03:22:17.683087   64850 logs.go:282] 0 containers: []
	W0205 03:22:17.683098   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:22:17.683106   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:22:17.683175   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:22:17.728982   64850 cri.go:89] found id: ""
	I0205 03:22:17.729013   64850 logs.go:282] 0 containers: []
	W0205 03:22:17.729023   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:22:17.729031   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:22:17.729089   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:22:17.776855   64850 cri.go:89] found id: ""
	I0205 03:22:17.776886   64850 logs.go:282] 0 containers: []
	W0205 03:22:17.776896   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:22:17.776901   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:22:17.776957   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:22:17.814488   64850 cri.go:89] found id: ""
	I0205 03:22:17.814521   64850 logs.go:282] 0 containers: []
	W0205 03:22:17.814551   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:22:17.814559   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:22:17.814626   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:22:17.855229   64850 cri.go:89] found id: ""
	I0205 03:22:17.855253   64850 logs.go:282] 0 containers: []
	W0205 03:22:17.855260   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:22:17.855266   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:22:17.855326   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:22:17.889930   64850 cri.go:89] found id: ""
	I0205 03:22:17.889958   64850 logs.go:282] 0 containers: []
	W0205 03:22:17.889968   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:22:17.889976   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:22:17.890035   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:22:17.932978   64850 cri.go:89] found id: ""
	I0205 03:22:17.933003   64850 logs.go:282] 0 containers: []
	W0205 03:22:17.933010   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:22:17.933016   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:22:17.933062   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:22:17.969818   64850 cri.go:89] found id: ""
	I0205 03:22:17.969845   64850 logs.go:282] 0 containers: []
	W0205 03:22:17.969854   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:22:17.969863   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:22:17.969878   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:22:18.012934   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:22:18.012972   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:18.064958   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:22:18.064997   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:22:18.078225   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:22:18.078251   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:22:18.159726   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:22:18.159753   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:22:18.159768   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:22:20.739455   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:20.753240   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:22:20.753299   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:22:20.786530   64850 cri.go:89] found id: ""
	I0205 03:22:20.786559   64850 logs.go:282] 0 containers: []
	W0205 03:22:20.786567   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:22:20.786573   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:22:20.786620   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:22:20.817215   64850 cri.go:89] found id: ""
	I0205 03:22:20.817242   64850 logs.go:282] 0 containers: []
	W0205 03:22:20.817250   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:22:20.817256   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:22:20.817299   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:22:20.849063   64850 cri.go:89] found id: ""
	I0205 03:22:20.849090   64850 logs.go:282] 0 containers: []
	W0205 03:22:20.849098   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:22:20.849103   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:22:20.849162   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:22:20.884190   64850 cri.go:89] found id: ""
	I0205 03:22:20.884220   64850 logs.go:282] 0 containers: []
	W0205 03:22:20.884229   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:22:20.884238   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:22:20.884297   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:22:20.914998   64850 cri.go:89] found id: ""
	I0205 03:22:20.915024   64850 logs.go:282] 0 containers: []
	W0205 03:22:20.915033   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:22:20.915040   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:22:20.915097   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:22:20.947003   64850 cri.go:89] found id: ""
	I0205 03:22:20.947035   64850 logs.go:282] 0 containers: []
	W0205 03:22:20.947046   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:22:20.947054   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:22:20.947126   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:22:20.976974   64850 cri.go:89] found id: ""
	I0205 03:22:20.977004   64850 logs.go:282] 0 containers: []
	W0205 03:22:20.977011   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:22:20.977018   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:22:20.977070   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:22:21.008814   64850 cri.go:89] found id: ""
	I0205 03:22:21.008845   64850 logs.go:282] 0 containers: []
	W0205 03:22:21.008852   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:22:21.008861   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:22:21.008871   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:22:21.086864   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:22:21.086915   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:22:21.122876   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:22:21.122908   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:21.178507   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:22:21.178540   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:22:21.191688   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:22:21.191723   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:22:21.262599   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:22:23.762720   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:23.775436   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:22:23.775513   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:22:23.805698   64850 cri.go:89] found id: ""
	I0205 03:22:23.805725   64850 logs.go:282] 0 containers: []
	W0205 03:22:23.805735   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:22:23.805743   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:22:23.805802   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:22:23.841818   64850 cri.go:89] found id: ""
	I0205 03:22:23.841844   64850 logs.go:282] 0 containers: []
	W0205 03:22:23.841854   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:22:23.841861   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:22:23.841916   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:22:23.886815   64850 cri.go:89] found id: ""
	I0205 03:22:23.886842   64850 logs.go:282] 0 containers: []
	W0205 03:22:23.886853   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:22:23.886860   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:22:23.886922   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:22:23.928824   64850 cri.go:89] found id: ""
	I0205 03:22:23.928851   64850 logs.go:282] 0 containers: []
	W0205 03:22:23.928861   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:22:23.928869   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:22:23.928925   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:22:23.972207   64850 cri.go:89] found id: ""
	I0205 03:22:23.972244   64850 logs.go:282] 0 containers: []
	W0205 03:22:23.972254   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:22:23.972262   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:22:23.972328   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:22:24.010940   64850 cri.go:89] found id: ""
	I0205 03:22:24.010967   64850 logs.go:282] 0 containers: []
	W0205 03:22:24.010975   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:22:24.010981   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:22:24.011034   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:22:24.048389   64850 cri.go:89] found id: ""
	I0205 03:22:24.048417   64850 logs.go:282] 0 containers: []
	W0205 03:22:24.048428   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:22:24.048436   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:22:24.048499   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:22:24.083097   64850 cri.go:89] found id: ""
	I0205 03:22:24.083121   64850 logs.go:282] 0 containers: []
	W0205 03:22:24.083129   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:22:24.083151   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:22:24.083165   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:22:24.181512   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:22:24.181550   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:22:24.220115   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:22:24.220142   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:24.275812   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:22:24.275844   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:22:24.289705   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:22:24.289743   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:22:24.372905   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:22:26.873501   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:26.892658   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:22:26.892742   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:22:26.939968   64850 cri.go:89] found id: ""
	I0205 03:22:26.939997   64850 logs.go:282] 0 containers: []
	W0205 03:22:26.940008   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:22:26.940016   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:22:26.940079   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:22:26.984132   64850 cri.go:89] found id: ""
	I0205 03:22:26.984161   64850 logs.go:282] 0 containers: []
	W0205 03:22:26.984172   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:22:26.984180   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:22:26.984247   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:22:27.033576   64850 cri.go:89] found id: ""
	I0205 03:22:27.033609   64850 logs.go:282] 0 containers: []
	W0205 03:22:27.033620   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:22:27.033627   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:22:27.033696   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:22:27.082695   64850 cri.go:89] found id: ""
	I0205 03:22:27.082724   64850 logs.go:282] 0 containers: []
	W0205 03:22:27.082734   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:22:27.082742   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:22:27.082805   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:22:27.122196   64850 cri.go:89] found id: ""
	I0205 03:22:27.122220   64850 logs.go:282] 0 containers: []
	W0205 03:22:27.122230   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:22:27.122237   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:22:27.122295   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:22:27.157319   64850 cri.go:89] found id: ""
	I0205 03:22:27.157380   64850 logs.go:282] 0 containers: []
	W0205 03:22:27.157397   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:22:27.157407   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:22:27.157473   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:22:27.197386   64850 cri.go:89] found id: ""
	I0205 03:22:27.197421   64850 logs.go:282] 0 containers: []
	W0205 03:22:27.197431   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:22:27.197438   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:22:27.197505   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:22:27.235221   64850 cri.go:89] found id: ""
	I0205 03:22:27.235247   64850 logs.go:282] 0 containers: []
	W0205 03:22:27.235257   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:22:27.235266   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:22:27.235279   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:27.286952   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:22:27.286993   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:22:27.306710   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:22:27.306757   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:22:27.388764   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:22:27.388814   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:22:27.388830   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:22:27.486295   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:22:27.486329   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:22:30.034528   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:30.046689   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:22:30.046760   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:22:30.079569   64850 cri.go:89] found id: ""
	I0205 03:22:30.079599   64850 logs.go:282] 0 containers: []
	W0205 03:22:30.079608   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:22:30.079614   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:22:30.079666   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:22:30.115319   64850 cri.go:89] found id: ""
	I0205 03:22:30.115348   64850 logs.go:282] 0 containers: []
	W0205 03:22:30.115360   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:22:30.115367   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:22:30.115422   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:22:30.147369   64850 cri.go:89] found id: ""
	I0205 03:22:30.147400   64850 logs.go:282] 0 containers: []
	W0205 03:22:30.147413   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:22:30.147421   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:22:30.147486   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:22:30.179974   64850 cri.go:89] found id: ""
	I0205 03:22:30.180007   64850 logs.go:282] 0 containers: []
	W0205 03:22:30.180017   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:22:30.180024   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:22:30.180084   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:22:30.211727   64850 cri.go:89] found id: ""
	I0205 03:22:30.211756   64850 logs.go:282] 0 containers: []
	W0205 03:22:30.211767   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:22:30.211774   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:22:30.211835   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:22:30.244961   64850 cri.go:89] found id: ""
	I0205 03:22:30.244979   64850 logs.go:282] 0 containers: []
	W0205 03:22:30.244986   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:22:30.244992   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:22:30.245046   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:22:30.285740   64850 cri.go:89] found id: ""
	I0205 03:22:30.285768   64850 logs.go:282] 0 containers: []
	W0205 03:22:30.285779   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:22:30.285787   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:22:30.285845   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:22:30.324632   64850 cri.go:89] found id: ""
	I0205 03:22:30.324655   64850 logs.go:282] 0 containers: []
	W0205 03:22:30.324662   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:22:30.324670   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:22:30.324681   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:22:30.337896   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:22:30.337933   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:22:30.410759   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:22:30.410788   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:22:30.410803   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:22:30.487924   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:22:30.487965   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:22:30.546423   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:22:30.546451   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:33.100841   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:33.113954   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:22:33.114021   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:22:33.149086   64850 cri.go:89] found id: ""
	I0205 03:22:33.149127   64850 logs.go:282] 0 containers: []
	W0205 03:22:33.149138   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:22:33.149147   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:22:33.149213   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:22:33.183730   64850 cri.go:89] found id: ""
	I0205 03:22:33.183760   64850 logs.go:282] 0 containers: []
	W0205 03:22:33.183771   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:22:33.183783   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:22:33.183843   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:22:33.218086   64850 cri.go:89] found id: ""
	I0205 03:22:33.218117   64850 logs.go:282] 0 containers: []
	W0205 03:22:33.218127   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:22:33.218133   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:22:33.218195   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:22:33.254497   64850 cri.go:89] found id: ""
	I0205 03:22:33.254525   64850 logs.go:282] 0 containers: []
	W0205 03:22:33.254536   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:22:33.254544   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:22:33.254601   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:22:33.295684   64850 cri.go:89] found id: ""
	I0205 03:22:33.295712   64850 logs.go:282] 0 containers: []
	W0205 03:22:33.295720   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:22:33.295726   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:22:33.295772   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:22:33.331279   64850 cri.go:89] found id: ""
	I0205 03:22:33.331309   64850 logs.go:282] 0 containers: []
	W0205 03:22:33.331320   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:22:33.331328   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:22:33.331393   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:22:33.368396   64850 cri.go:89] found id: ""
	I0205 03:22:33.368429   64850 logs.go:282] 0 containers: []
	W0205 03:22:33.368436   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:22:33.368443   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:22:33.368498   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:22:33.400455   64850 cri.go:89] found id: ""
	I0205 03:22:33.400479   64850 logs.go:282] 0 containers: []
	W0205 03:22:33.400487   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:22:33.400496   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:22:33.400506   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:22:33.485731   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:22:33.485771   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:22:33.525989   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:22:33.526019   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:22:33.575225   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:22:33.575267   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0205 03:22:33.588558   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:22:33.588586   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:22:33.657863   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:22:36.158577   64850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:22:36.175179   64850 kubeadm.go:597] duration metric: took 4m3.396543094s to restartPrimaryControlPlane
	W0205 03:22:36.175263   64850 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0205 03:22:36.175294   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0205 03:22:36.646683   64850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:22:36.665377   64850 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 03:22:36.678699   64850 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:22:36.691573   64850 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:22:36.691607   64850 kubeadm.go:157] found existing configuration files:
	
	I0205 03:22:36.691661   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:22:36.703840   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:22:36.703912   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:22:36.716576   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:22:36.727021   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:22:36.727087   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:22:36.736822   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:22:36.749622   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:22:36.749680   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:22:36.762557   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:22:36.772374   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:22:36.772458   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 03:22:36.782689   64850 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:22:36.846604   64850 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0205 03:22:36.846681   64850 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:22:36.983973   64850 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:22:36.984162   64850 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:22:36.984331   64850 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0205 03:22:37.174807   64850 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:22:37.177766   64850 out.go:235]   - Generating certificates and keys ...
	I0205 03:22:37.177879   64850 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:22:37.177977   64850 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:22:37.178120   64850 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0205 03:22:37.178219   64850 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0205 03:22:37.178315   64850 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0205 03:22:37.178403   64850 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0205 03:22:37.178490   64850 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0205 03:22:37.178573   64850 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0205 03:22:37.178668   64850 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0205 03:22:37.178760   64850 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0205 03:22:37.178814   64850 kubeadm.go:310] [certs] Using the existing "sa" key
	I0205 03:22:37.178891   64850 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:22:37.306413   64850 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:22:37.471811   64850 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:22:37.629113   64850 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:22:37.826547   64850 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:22:37.848683   64850 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:22:37.849856   64850 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:22:37.849924   64850 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:22:38.006980   64850 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:22:38.008730   64850 out.go:235]   - Booting up control plane ...
	I0205 03:22:38.008882   64850 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:22:38.016017   64850 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:22:38.017240   64850 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:22:38.018613   64850 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:22:38.023669   64850 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0205 03:23:18.024616   64850 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0205 03:23:18.025984   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:23:18.026240   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:23:23.026781   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:23:23.027035   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:23:33.027407   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:23:33.027701   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:23:53.028337   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:23:53.028612   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:24:33.030607   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:24:33.030843   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:24:33.030862   64850 kubeadm.go:310] 
	I0205 03:24:33.030925   64850 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0205 03:24:33.030981   64850 kubeadm.go:310] 		timed out waiting for the condition
	I0205 03:24:33.030992   64850 kubeadm.go:310] 
	I0205 03:24:33.031053   64850 kubeadm.go:310] 	This error is likely caused by:
	I0205 03:24:33.031097   64850 kubeadm.go:310] 		- The kubelet is not running
	I0205 03:24:33.031264   64850 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0205 03:24:33.031277   64850 kubeadm.go:310] 
	I0205 03:24:33.031426   64850 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0205 03:24:33.031465   64850 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0205 03:24:33.031540   64850 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0205 03:24:33.031564   64850 kubeadm.go:310] 
	I0205 03:24:33.031719   64850 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0205 03:24:33.031851   64850 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0205 03:24:33.031862   64850 kubeadm.go:310] 
	I0205 03:24:33.032028   64850 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0205 03:24:33.032116   64850 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0205 03:24:33.032194   64850 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0205 03:24:33.032270   64850 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0205 03:24:33.032281   64850 kubeadm.go:310] 
	I0205 03:24:33.032574   64850 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:24:33.032670   64850 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0205 03:24:33.032746   64850 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0205 03:24:33.032863   64850 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0205 03:24:33.032902   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0205 03:24:33.515529   64850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:24:33.530088   64850 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:24:33.541440   64850 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:24:33.541464   64850 kubeadm.go:157] found existing configuration files:
	
	I0205 03:24:33.541511   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:24:33.550975   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:24:33.551030   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:24:33.563449   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:24:33.572979   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:24:33.573044   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:24:33.584856   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:24:33.595541   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:24:33.595605   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:24:33.607257   64850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:24:33.617162   64850 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:24:33.617223   64850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 03:24:33.627171   64850 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:24:33.697096   64850 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0205 03:24:33.697173   64850 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:24:33.840677   64850 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:24:33.840826   64850 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:24:33.840973   64850 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0205 03:24:34.026641   64850 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:24:34.029304   64850 out.go:235]   - Generating certificates and keys ...
	I0205 03:24:34.029441   64850 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:24:34.029559   64850 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:24:34.029654   64850 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0205 03:24:34.029755   64850 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0205 03:24:34.029829   64850 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0205 03:24:34.029891   64850 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0205 03:24:34.029977   64850 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0205 03:24:34.030080   64850 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0205 03:24:34.030169   64850 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0205 03:24:34.030292   64850 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0205 03:24:34.030362   64850 kubeadm.go:310] [certs] Using the existing "sa" key
	I0205 03:24:34.030467   64850 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:24:34.367841   64850 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:24:34.504228   64850 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:24:34.756102   64850 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:24:34.995663   64850 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:24:35.019376   64850 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:24:35.025603   64850 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:24:35.025654   64850 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:24:35.172865   64850 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:24:35.174430   64850 out.go:235]   - Booting up control plane ...
	I0205 03:24:35.174571   64850 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:24:35.181119   64850 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:24:35.181253   64850 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:24:35.181546   64850 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:24:35.183969   64850 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0205 03:25:15.186032   64850 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0205 03:25:15.186110   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:25:15.186434   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:25:20.187049   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:25:20.187366   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:25:30.187609   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:25:30.187908   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:25:50.188302   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:25:50.188480   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:26:30.188344   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:26:30.188671   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:26:30.188700   64850 kubeadm.go:310] 
	I0205 03:26:30.188744   64850 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0205 03:26:30.188800   64850 kubeadm.go:310] 		timed out waiting for the condition
	I0205 03:26:30.188809   64850 kubeadm.go:310] 
	I0205 03:26:30.188858   64850 kubeadm.go:310] 	This error is likely caused by:
	I0205 03:26:30.188898   64850 kubeadm.go:310] 		- The kubelet is not running
	I0205 03:26:30.188985   64850 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0205 03:26:30.188994   64850 kubeadm.go:310] 
	I0205 03:26:30.189183   64850 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0205 03:26:30.189262   64850 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0205 03:26:30.189315   64850 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0205 03:26:30.189328   64850 kubeadm.go:310] 
	I0205 03:26:30.189479   64850 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0205 03:26:30.189604   64850 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0205 03:26:30.189616   64850 kubeadm.go:310] 
	I0205 03:26:30.189794   64850 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0205 03:26:30.189910   64850 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0205 03:26:30.190015   64850 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0205 03:26:30.190114   64850 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0205 03:26:30.190170   64850 kubeadm.go:310] 
	I0205 03:26:30.190330   64850 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:26:30.190446   64850 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0205 03:26:30.190622   64850 kubeadm.go:394] duration metric: took 7m57.462882999s to StartCluster
	I0205 03:26:30.190638   64850 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0205 03:26:30.190670   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:26:30.190724   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:26:30.239529   64850 cri.go:89] found id: ""
	I0205 03:26:30.239563   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.239575   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:26:30.239585   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:26:30.239655   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:26:30.280172   64850 cri.go:89] found id: ""
	I0205 03:26:30.280208   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.280220   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:26:30.280229   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:26:30.280297   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:26:30.334201   64850 cri.go:89] found id: ""
	I0205 03:26:30.334228   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.334238   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:26:30.334250   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:26:30.334310   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:26:30.376499   64850 cri.go:89] found id: ""
	I0205 03:26:30.376525   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.376532   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:26:30.376539   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:26:30.376600   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:26:30.419583   64850 cri.go:89] found id: ""
	I0205 03:26:30.419608   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.419616   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:26:30.419622   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:26:30.419681   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:26:30.457014   64850 cri.go:89] found id: ""
	I0205 03:26:30.457049   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.457059   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:26:30.457067   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:26:30.457121   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:26:30.501068   64850 cri.go:89] found id: ""
	I0205 03:26:30.501091   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.501098   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:26:30.501104   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:26:30.501161   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:26:30.538384   64850 cri.go:89] found id: ""
	I0205 03:26:30.538420   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.538431   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:26:30.538443   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:26:30.538460   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:26:30.627025   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:26:30.627055   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:26:30.627072   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:26:30.749529   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:26:30.749561   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:26:30.794162   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:26:30.794188   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:26:30.849515   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:26:30.849555   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0205 03:26:30.865114   64850 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0205 03:26:30.865167   64850 out.go:270] * 
	W0205 03:26:30.865227   64850 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0205 03:26:30.865244   64850 out.go:270] * 
	W0205 03:26:30.866482   64850 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0205 03:26:30.869500   64850 out.go:201] 
	W0205 03:26:30.870525   64850 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0205 03:26:30.870589   64850 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0205 03:26:30.870619   64850 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0205 03:26:30.871955   64850 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-191773 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
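The failing start above is minikube's K8S_KUBELET_NOT_RUNNING path: kubeadm's wait-control-plane phase gave up after the kubelet never answered on localhost:10248. A minimal manual follow-up sketch, using only the commands the log itself suggests and assuming the same profile, driver, and binary as the failing invocation (the cgroup-driver override is the log's proposed workaround, not a confirmed fix):

	# Retry the same profile with the suggested kubelet cgroup-driver override (other flags from the failing run omitted for brevity)
	out/minikube-linux-amd64 start -p old-k8s-version-191773 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# Inspect kubelet health and any crashed control-plane containers on the node, as the kubeadm output advises
	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"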
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 2 (258.76794ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-191773 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-253147 sudo systemctl                        | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo cat                              | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo cat                              | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo                                  | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo systemctl                        | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo systemctl                        | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo cat                              | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo cat                              | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo containerd                       | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo systemctl                        | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo systemctl                        | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo find                             | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p auto-253147 sudo crio                             | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p auto-253147                                       | auto-253147    | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:25 UTC |
	| start   | -p kindnet-253147                                    | kindnet-253147 | jenkins | v1.35.0 | 05 Feb 25 03:25 UTC | 05 Feb 25 03:26 UTC |
	|         | --memory=3072                                        |                |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                          |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	| ssh     | -p kindnet-253147 pgrep -a                           | kindnet-253147 | jenkins | v1.35.0 | 05 Feb 25 03:26 UTC | 05 Feb 25 03:26 UTC |
	|         | kubelet                                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-253147 sudo cat                           | kindnet-253147 | jenkins | v1.35.0 | 05 Feb 25 03:26 UTC | 05 Feb 25 03:26 UTC |
	|         | /etc/nsswitch.conf                                   |                |         |         |                     |                     |
	| ssh     | -p kindnet-253147 sudo cat                           | kindnet-253147 | jenkins | v1.35.0 | 05 Feb 25 03:26 UTC | 05 Feb 25 03:26 UTC |
	|         | /etc/hosts                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-253147 sudo cat                           | kindnet-253147 | jenkins | v1.35.0 | 05 Feb 25 03:26 UTC | 05 Feb 25 03:26 UTC |
	|         | /etc/resolv.conf                                     |                |         |         |                     |                     |
	| ssh     | -p kindnet-253147 sudo crictl                        | kindnet-253147 | jenkins | v1.35.0 | 05 Feb 25 03:26 UTC | 05 Feb 25 03:26 UTC |
	|         | pods                                                 |                |         |         |                     |                     |
	| ssh     | -p kindnet-253147 sudo crictl                        | kindnet-253147 | jenkins | v1.35.0 | 05 Feb 25 03:26 UTC | 05 Feb 25 03:26 UTC |
	|         | ps --all                                             |                |         |         |                     |                     |
	| ssh     | -p kindnet-253147 sudo find                          | kindnet-253147 | jenkins | v1.35.0 | 05 Feb 25 03:26 UTC | 05 Feb 25 03:26 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p kindnet-253147 sudo ip a s                        | kindnet-253147 | jenkins | v1.35.0 | 05 Feb 25 03:26 UTC | 05 Feb 25 03:26 UTC |
	| ssh     | -p kindnet-253147 sudo ip r s                        | kindnet-253147 | jenkins | v1.35.0 | 05 Feb 25 03:26 UTC | 05 Feb 25 03:26 UTC |
	| ssh     | -p kindnet-253147 sudo                               | kindnet-253147 | jenkins | v1.35.0 | 05 Feb 25 03:26 UTC | 05 Feb 25 03:26 UTC |
	|         | iptables-save                                        |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 03:25:07
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 03:25:07.495993   70401 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:25:07.496133   70401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:25:07.496142   70401 out.go:358] Setting ErrFile to fd 2...
	I0205 03:25:07.496147   70401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:25:07.496350   70401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:25:07.496972   70401 out.go:352] Setting JSON to false
	I0205 03:25:07.498110   70401 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7658,"bootTime":1738718249,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:25:07.498243   70401 start.go:139] virtualization: kvm guest
	I0205 03:25:07.500302   70401 out.go:177] * [kindnet-253147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:25:07.501944   70401 notify.go:220] Checking for updates...
	I0205 03:25:07.501962   70401 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:25:07.503222   70401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:25:07.504431   70401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:25:07.505605   70401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:25:07.506835   70401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:25:07.508165   70401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:25:07.510039   70401 config.go:182] Loaded profile config "default-k8s-diff-port-568677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:25:07.510170   70401 config.go:182] Loaded profile config "kubernetes-upgrade-024079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:25:07.510296   70401 config.go:182] Loaded profile config "old-k8s-version-191773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0205 03:25:07.510393   70401 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:25:07.551456   70401 out.go:177] * Using the kvm2 driver based on user configuration
	I0205 03:25:07.552605   70401 start.go:297] selected driver: kvm2
	I0205 03:25:07.552623   70401 start.go:901] validating driver "kvm2" against <nil>
	I0205 03:25:07.552639   70401 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:25:07.553707   70401 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:25:07.553823   70401 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:25:07.570609   70401 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:25:07.570669   70401 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0205 03:25:07.570929   70401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:25:07.570958   70401 cni.go:84] Creating CNI manager for "kindnet"
	I0205 03:25:07.570963   70401 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0205 03:25:07.571005   70401 start.go:340] cluster config:
	{Name:kindnet-253147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kindnet-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:25:07.571093   70401 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:25:07.572823   70401 out.go:177] * Starting "kindnet-253147" primary control-plane node in "kindnet-253147" cluster
	I0205 03:25:07.573900   70401 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:25:07.573953   70401 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 03:25:07.573970   70401 cache.go:56] Caching tarball of preloaded images
	I0205 03:25:07.574050   70401 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 03:25:07.574065   70401 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 03:25:07.574200   70401 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/config.json ...
	I0205 03:25:07.574223   70401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/config.json: {Name:mk0d2b67041549745f26f0b8be1dbefa150dd767 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:25:07.574389   70401 start.go:360] acquireMachinesLock for kindnet-253147: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:25:07.574436   70401 start.go:364] duration metric: took 30.069µs to acquireMachinesLock for "kindnet-253147"
	I0205 03:25:07.574460   70401 start.go:93] Provisioning new machine with config: &{Name:kindnet-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kindnet-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:25:07.574537   70401 start.go:125] createHost starting for "" (driver="kvm2")
	I0205 03:25:05.873236   68832 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0205 03:25:05.873262   68832 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0205 03:25:05.873275   68832 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I0205 03:25:05.896001   68832 api_server.go:279] https://192.168.72.253:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0205 03:25:05.896028   68832 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0205 03:25:06.304639   68832 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I0205 03:25:06.309900   68832 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0205 03:25:06.309938   68832 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0205 03:25:06.804271   68832 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I0205 03:25:06.813052   68832 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0205 03:25:06.813077   68832 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0205 03:25:07.304796   68832 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I0205 03:25:07.311066   68832 api_server.go:279] https://192.168.72.253:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0205 03:25:07.311093   68832 api_server.go:103] status: https://192.168.72.253:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0205 03:25:07.804801   68832 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I0205 03:25:07.811310   68832 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I0205 03:25:07.818740   68832 api_server.go:141] control plane version: v1.32.1
	I0205 03:25:07.818766   68832 api_server.go:131] duration metric: took 4.514746258s to wait for apiserver health ...
	I0205 03:25:07.818773   68832 cni.go:84] Creating CNI manager for ""
	I0205 03:25:07.818779   68832 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 03:25:07.820519   68832 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0205 03:25:07.821724   68832 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0205 03:25:07.832645   68832 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0205 03:25:07.858630   68832 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 03:25:07.867716   68832 system_pods.go:59] 8 kube-system pods found
	I0205 03:25:07.867775   68832 system_pods.go:61] "coredns-668d6bf9bc-k62ng" [75895d31-a808-4270-9461-effa28b06c23] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0205 03:25:07.867796   68832 system_pods.go:61] "etcd-default-k8s-diff-port-568677" [6efdac1a-293d-4416-be97-2616583c5de4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0205 03:25:07.867810   68832 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-568677" [0a520e56-33de-4552-a04d-c9df7ed6970e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0205 03:25:07.867824   68832 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-568677" [20bc3fc5-bffb-4558-a234-0f4063772ad9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0205 03:25:07.867836   68832 system_pods.go:61] "kube-proxy-2fbxx" [4f40dfbd-5d10-4ff4-846c-9d403d82e055] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0205 03:25:07.867850   68832 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-568677" [8d786b83-2c4d-4236-81d9-0a46ec9d1e9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0205 03:25:07.867858   68832 system_pods.go:61] "metrics-server-f79f97bbb-k9q9v" [a8a1ac9d-6738-45bf-a77a-5350d4efb8b1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0205 03:25:07.867871   68832 system_pods.go:61] "storage-provisioner" [d3c8e7ac-2049-4fca-a719-e5ff474c1aec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0205 03:25:07.867883   68832 system_pods.go:74] duration metric: took 9.23074ms to wait for pod list to return data ...
	I0205 03:25:07.867895   68832 node_conditions.go:102] verifying NodePressure condition ...
	I0205 03:25:07.877564   68832 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0205 03:25:07.877590   68832 node_conditions.go:123] node cpu capacity is 2
	I0205 03:25:07.877602   68832 node_conditions.go:105] duration metric: took 9.69854ms to run NodePressure ...
	I0205 03:25:07.877621   68832 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0205 03:25:08.265692   68832 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0205 03:25:08.268751   68832 kubeadm.go:739] kubelet initialised
	I0205 03:25:08.268778   68832 kubeadm.go:740] duration metric: took 3.056977ms waiting for restarted kubelet to initialise ...
	I0205 03:25:08.268787   68832 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:25:08.272378   68832 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-k62ng" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:08.277879   68832 pod_ready.go:98] node "default-k8s-diff-port-568677" hosting pod "coredns-668d6bf9bc-k62ng" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:08.277904   68832 pod_ready.go:82] duration metric: took 5.489658ms for pod "coredns-668d6bf9bc-k62ng" in "kube-system" namespace to be "Ready" ...
	E0205 03:25:08.277913   68832 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-568677" hosting pod "coredns-668d6bf9bc-k62ng" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:08.277919   68832 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:08.282330   68832 pod_ready.go:98] node "default-k8s-diff-port-568677" hosting pod "etcd-default-k8s-diff-port-568677" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:08.282359   68832 pod_ready.go:82] duration metric: took 4.430983ms for pod "etcd-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	E0205 03:25:08.282374   68832 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-568677" hosting pod "etcd-default-k8s-diff-port-568677" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:08.282383   68832 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:08.285872   68832 pod_ready.go:98] node "default-k8s-diff-port-568677" hosting pod "kube-apiserver-default-k8s-diff-port-568677" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:08.285894   68832 pod_ready.go:82] duration metric: took 3.501297ms for pod "kube-apiserver-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	E0205 03:25:08.285907   68832 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-568677" hosting pod "kube-apiserver-default-k8s-diff-port-568677" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:08.285916   68832 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:08.290512   68832 pod_ready.go:98] node "default-k8s-diff-port-568677" hosting pod "kube-controller-manager-default-k8s-diff-port-568677" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:08.290534   68832 pod_ready.go:82] duration metric: took 4.605033ms for pod "kube-controller-manager-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	E0205 03:25:08.290546   68832 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-568677" hosting pod "kube-controller-manager-default-k8s-diff-port-568677" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:08.290554   68832 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2fbxx" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:08.669617   68832 pod_ready.go:98] node "default-k8s-diff-port-568677" hosting pod "kube-proxy-2fbxx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:08.669647   68832 pod_ready.go:82] duration metric: took 379.084241ms for pod "kube-proxy-2fbxx" in "kube-system" namespace to be "Ready" ...
	E0205 03:25:08.669660   68832 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-568677" hosting pod "kube-proxy-2fbxx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:08.669668   68832 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:09.095758   68832 pod_ready.go:98] node "default-k8s-diff-port-568677" hosting pod "kube-scheduler-default-k8s-diff-port-568677" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:09.095790   68832 pod_ready.go:82] duration metric: took 426.110947ms for pod "kube-scheduler-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	E0205 03:25:09.095804   68832 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-568677" hosting pod "kube-scheduler-default-k8s-diff-port-568677" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:09.095814   68832 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:09.469814   68832 pod_ready.go:98] node "default-k8s-diff-port-568677" hosting pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:09.469849   68832 pod_ready.go:82] duration metric: took 374.024352ms for pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace to be "Ready" ...
	E0205 03:25:09.469866   68832 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-568677" hosting pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:09.469879   68832 pod_ready.go:39] duration metric: took 1.201080586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:25:09.469901   68832 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 03:25:09.483849   68832 ops.go:34] apiserver oom_adj: -16
	I0205 03:25:09.483877   68832 kubeadm.go:597] duration metric: took 9.130095735s to restartPrimaryControlPlane
	I0205 03:25:09.483889   68832 kubeadm.go:394] duration metric: took 9.179063293s to StartCluster
	I0205 03:25:09.483910   68832 settings.go:142] acquiring lock: {Name:mk2eca847da5ba78f5b041a83e5cfcbdebb0c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:25:09.484008   68832 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:25:09.485092   68832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:25:09.485328   68832 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.253 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:25:09.485460   68832 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0205 03:25:09.485544   68832 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-568677"
	I0205 03:25:09.485569   68832 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-568677"
	W0205 03:25:09.485581   68832 addons.go:247] addon storage-provisioner should already be in state true
	I0205 03:25:09.485613   68832 host.go:66] Checking if "default-k8s-diff-port-568677" exists ...
	I0205 03:25:09.485611   68832 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-568677"
	I0205 03:25:09.485617   68832 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-568677"
	I0205 03:25:09.485642   68832 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-568677"
	W0205 03:25:09.485657   68832 addons.go:247] addon metrics-server should already be in state true
	I0205 03:25:09.485693   68832 host.go:66] Checking if "default-k8s-diff-port-568677" exists ...
	I0205 03:25:09.485642   68832 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-568677"
	I0205 03:25:09.486038   68832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:09.486087   68832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:09.486175   68832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:09.486195   68832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:09.486220   68832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:09.486223   68832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:09.486321   68832 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-568677"
	I0205 03:25:09.486356   68832 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-568677"
	W0205 03:25:09.486368   68832 addons.go:247] addon dashboard should already be in state true
	I0205 03:25:09.486440   68832 config.go:182] Loaded profile config "default-k8s-diff-port-568677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:25:09.486566   68832 host.go:66] Checking if "default-k8s-diff-port-568677" exists ...
	I0205 03:25:09.487022   68832 out.go:177] * Verifying Kubernetes components...
	I0205 03:25:09.487054   68832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:09.487123   68832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:09.494416   68832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:25:09.508360   68832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I0205 03:25:09.508380   68832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35347
	I0205 03:25:09.508806   68832 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:09.508829   68832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0205 03:25:09.508933   68832 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:09.509215   68832 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:09.509431   68832 main.go:141] libmachine: Using API Version  1
	I0205 03:25:09.509447   68832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:09.509787   68832 main.go:141] libmachine: Using API Version  1
	I0205 03:25:09.509810   68832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:09.509882   68832 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:09.509914   68832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0205 03:25:09.510184   68832 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:09.510235   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetState
	I0205 03:25:09.510493   68832 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:09.510776   68832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:09.510825   68832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:09.510847   68832 main.go:141] libmachine: Using API Version  1
	I0205 03:25:09.510857   68832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:09.511225   68832 main.go:141] libmachine: Using API Version  1
	I0205 03:25:09.511239   68832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:09.511282   68832 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:09.511739   68832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:09.511761   68832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:09.513783   68832 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-568677"
	W0205 03:25:09.513805   68832 addons.go:247] addon default-storageclass should already be in state true
	I0205 03:25:09.513833   68832 host.go:66] Checking if "default-k8s-diff-port-568677" exists ...
	I0205 03:25:09.514204   68832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:09.514246   68832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:09.514899   68832 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:09.515468   68832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:09.515509   68832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:09.528179   68832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0205 03:25:09.530433   68832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I0205 03:25:09.538031   68832 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:09.538759   68832 main.go:141] libmachine: Using API Version  1
	I0205 03:25:09.538792   68832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:09.539218   68832 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:09.539492   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetState
	I0205 03:25:09.539805   68832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I0205 03:25:09.540785   68832 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:09.541457   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .DriverName
	I0205 03:25:09.541777   68832 main.go:141] libmachine: Using API Version  1
	I0205 03:25:09.541799   68832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:09.542193   68832 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:09.542765   68832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:09.542811   68832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:09.544017   68832 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:09.544148   68832 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0205 03:25:09.544820   68832 main.go:141] libmachine: Using API Version  1
	I0205 03:25:09.544847   68832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:09.545319   68832 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:09.545696   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetState
	I0205 03:25:09.546712   68832 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0205 03:25:09.547849   68832 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0205 03:25:09.547874   68832 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0205 03:25:09.547897   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHHostname
	I0205 03:25:09.548689   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .DriverName
	I0205 03:25:09.550234   68832 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0205 03:25:09.551684   68832 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0205 03:25:09.551706   68832 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0205 03:25:09.551730   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHHostname
	I0205 03:25:09.552807   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | domain default-k8s-diff-port-568677 has defined MAC address 52:54:00:45:b3:97 in network mk-default-k8s-diff-port-568677
	I0205 03:25:09.553285   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:b3:97", ip: ""} in network mk-default-k8s-diff-port-568677: {Iface:virbr1 ExpiryTime:2025-02-05 04:24:44 +0000 UTC Type:0 Mac:52:54:00:45:b3:97 Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-568677 Clientid:01:52:54:00:45:b3:97}
	I0205 03:25:09.553310   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | domain default-k8s-diff-port-568677 has defined IP address 192.168.72.253 and MAC address 52:54:00:45:b3:97 in network mk-default-k8s-diff-port-568677
	I0205 03:25:09.553621   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHPort
	I0205 03:25:09.553783   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHKeyPath
	I0205 03:25:09.553924   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHUsername
	I0205 03:25:09.554111   68832 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/default-k8s-diff-port-568677/id_rsa Username:docker}
	I0205 03:25:09.555660   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | domain default-k8s-diff-port-568677 has defined MAC address 52:54:00:45:b3:97 in network mk-default-k8s-diff-port-568677
	I0205 03:25:09.556073   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:b3:97", ip: ""} in network mk-default-k8s-diff-port-568677: {Iface:virbr1 ExpiryTime:2025-02-05 04:24:44 +0000 UTC Type:0 Mac:52:54:00:45:b3:97 Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-568677 Clientid:01:52:54:00:45:b3:97}
	I0205 03:25:09.556098   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | domain default-k8s-diff-port-568677 has defined IP address 192.168.72.253 and MAC address 52:54:00:45:b3:97 in network mk-default-k8s-diff-port-568677
	I0205 03:25:09.556387   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHPort
	I0205 03:25:09.556562   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHKeyPath
	I0205 03:25:09.556717   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHUsername
	I0205 03:25:09.556851   68832 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/default-k8s-diff-port-568677/id_rsa Username:docker}
	I0205 03:25:09.577239   68832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40835
	I0205 03:25:09.577936   68832 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:09.578439   68832 main.go:141] libmachine: Using API Version  1
	I0205 03:25:09.578459   68832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:09.578892   68832 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:09.579073   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetState
	I0205 03:25:09.581050   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .DriverName
	I0205 03:25:09.581290   68832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37533
	I0205 03:25:09.581847   68832 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:09.582389   68832 main.go:141] libmachine: Using API Version  1
	I0205 03:25:09.582412   68832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:09.582496   68832 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0205 03:25:09.582510   68832 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0205 03:25:09.582526   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHHostname
	I0205 03:25:09.582869   68832 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:09.583030   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetState
	I0205 03:25:09.584793   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .DriverName
	I0205 03:25:09.586533   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | domain default-k8s-diff-port-568677 has defined MAC address 52:54:00:45:b3:97 in network mk-default-k8s-diff-port-568677
	I0205 03:25:09.586870   68832 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:25:09.587029   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:b3:97", ip: ""} in network mk-default-k8s-diff-port-568677: {Iface:virbr1 ExpiryTime:2025-02-05 04:24:44 +0000 UTC Type:0 Mac:52:54:00:45:b3:97 Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-568677 Clientid:01:52:54:00:45:b3:97}
	I0205 03:25:09.587060   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | domain default-k8s-diff-port-568677 has defined IP address 192.168.72.253 and MAC address 52:54:00:45:b3:97 in network mk-default-k8s-diff-port-568677
	I0205 03:25:09.587316   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHPort
	I0205 03:25:09.587518   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHKeyPath
	I0205 03:25:09.587746   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHUsername
	I0205 03:25:09.587914   68832 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/default-k8s-diff-port-568677/id_rsa Username:docker}
	I0205 03:25:09.588230   68832 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:25:09.588241   68832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0205 03:25:09.588254   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHHostname
	I0205 03:25:09.604232   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | domain default-k8s-diff-port-568677 has defined MAC address 52:54:00:45:b3:97 in network mk-default-k8s-diff-port-568677
	I0205 03:25:09.604449   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:b3:97", ip: ""} in network mk-default-k8s-diff-port-568677: {Iface:virbr1 ExpiryTime:2025-02-05 04:24:44 +0000 UTC Type:0 Mac:52:54:00:45:b3:97 Iaid: IPaddr:192.168.72.253 Prefix:24 Hostname:default-k8s-diff-port-568677 Clientid:01:52:54:00:45:b3:97}
	I0205 03:25:09.604664   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | domain default-k8s-diff-port-568677 has defined IP address 192.168.72.253 and MAC address 52:54:00:45:b3:97 in network mk-default-k8s-diff-port-568677
	I0205 03:25:09.604730   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHPort
	I0205 03:25:09.604923   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHKeyPath
	I0205 03:25:09.605083   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .GetSSHUsername
	I0205 03:25:09.605238   68832 sshutil.go:53] new ssh client: &{IP:192.168.72.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/default-k8s-diff-port-568677/id_rsa Username:docker}
	I0205 03:25:09.770359   68832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:25:09.797412   68832 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-568677" to be "Ready" ...
	I0205 03:25:09.867415   68832 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0205 03:25:09.867441   68832 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0205 03:25:09.880484   68832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0205 03:25:09.908409   68832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:25:09.927620   68832 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0205 03:25:09.927647   68832 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0205 03:25:09.988128   68832 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0205 03:25:09.988164   68832 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0205 03:25:10.002488   68832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0205 03:25:10.002516   68832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0205 03:25:10.038471   68832 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0205 03:25:10.038500   68832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0205 03:25:10.059867   68832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0205 03:25:10.059896   68832 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0205 03:25:10.096172   68832 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0205 03:25:10.096204   68832 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0205 03:25:10.097862   68832 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0205 03:25:10.097898   68832 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0205 03:25:10.125226   68832 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0205 03:25:10.125265   68832 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0205 03:25:10.179150   68832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0205 03:25:10.180573   68832 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0205 03:25:10.180599   68832 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0205 03:25:10.237428   68832 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0205 03:25:10.237462   68832 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0205 03:25:10.284322   68832 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0205 03:25:10.284346   68832 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0205 03:25:10.332807   68832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0205 03:25:10.335046   68832 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:10.335071   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .Close
	I0205 03:25:10.335351   68832 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:10.335368   68832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:10.335375   68832 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:10.335382   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .Close
	I0205 03:25:10.335646   68832 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:10.335664   68832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:10.335688   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | Closing plugin on server side
	I0205 03:25:10.341503   68832 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:10.341525   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .Close
	I0205 03:25:10.341794   68832 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:10.341814   68832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:10.341831   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | Closing plugin on server side
	I0205 03:25:11.336562   68832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.42810736s)
	I0205 03:25:11.336621   68832 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:11.336642   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .Close
	I0205 03:25:11.336939   68832 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:11.336957   68832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:11.336965   68832 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:11.336973   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .Close
	I0205 03:25:11.337233   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | Closing plugin on server side
	I0205 03:25:11.337267   68832 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:11.337274   68832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:11.368442   68832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.189243357s)
	I0205 03:25:11.368508   68832 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:11.368527   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .Close
	I0205 03:25:11.368849   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | Closing plugin on server side
	I0205 03:25:11.368890   68832 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:11.368899   68832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:11.368913   68832 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:11.368921   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .Close
	I0205 03:25:11.369256   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | Closing plugin on server side
	I0205 03:25:11.369278   68832 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:11.369293   68832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:11.369312   68832 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-568677"
	I0205 03:25:11.727024   68832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.394178716s)
	I0205 03:25:11.727069   68832 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:11.727082   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .Close
	I0205 03:25:11.727414   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | Closing plugin on server side
	I0205 03:25:11.727464   68832 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:11.727475   68832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:11.727484   68832 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:11.727509   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) Calling .Close
	I0205 03:25:11.727767   68832 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:11.727802   68832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:11.727812   68832 main.go:141] libmachine: (default-k8s-diff-port-568677) DBG | Closing plugin on server side
	I0205 03:25:11.730303   68832 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-568677 addons enable metrics-server
	
	I0205 03:25:11.731552   68832 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0205 03:25:07.575922   70401 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0205 03:25:07.576098   70401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:07.576169   70401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:07.592096   70401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0205 03:25:07.592575   70401 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:07.593257   70401 main.go:141] libmachine: Using API Version  1
	I0205 03:25:07.593293   70401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:07.593729   70401 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:07.593950   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetMachineName
	I0205 03:25:07.594103   70401 main.go:141] libmachine: (kindnet-253147) Calling .DriverName
	I0205 03:25:07.594269   70401 start.go:159] libmachine.API.Create for "kindnet-253147" (driver="kvm2")
	I0205 03:25:07.594299   70401 client.go:168] LocalClient.Create starting
	I0205 03:25:07.594335   70401 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem
	I0205 03:25:07.594371   70401 main.go:141] libmachine: Decoding PEM data...
	I0205 03:25:07.594390   70401 main.go:141] libmachine: Parsing certificate...
	I0205 03:25:07.594465   70401 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem
	I0205 03:25:07.594491   70401 main.go:141] libmachine: Decoding PEM data...
	I0205 03:25:07.594516   70401 main.go:141] libmachine: Parsing certificate...
	I0205 03:25:07.594548   70401 main.go:141] libmachine: Running pre-create checks...
	I0205 03:25:07.594561   70401 main.go:141] libmachine: (kindnet-253147) Calling .PreCreateCheck
	I0205 03:25:07.594913   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetConfigRaw
	I0205 03:25:07.595342   70401 main.go:141] libmachine: Creating machine...
	I0205 03:25:07.595355   70401 main.go:141] libmachine: (kindnet-253147) Calling .Create
	I0205 03:25:07.595482   70401 main.go:141] libmachine: (kindnet-253147) creating KVM machine...
	I0205 03:25:07.595502   70401 main.go:141] libmachine: (kindnet-253147) creating network...
	I0205 03:25:07.596907   70401 main.go:141] libmachine: (kindnet-253147) DBG | found existing default KVM network
	I0205 03:25:07.598198   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:07.598025   70424 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e6:1a:05} reservation:<nil>}
	I0205 03:25:07.599346   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:07.599249   70424 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000117ef0}
	I0205 03:25:07.599390   70401 main.go:141] libmachine: (kindnet-253147) DBG | created network xml: 
	I0205 03:25:07.599412   70401 main.go:141] libmachine: (kindnet-253147) DBG | <network>
	I0205 03:25:07.599425   70401 main.go:141] libmachine: (kindnet-253147) DBG |   <name>mk-kindnet-253147</name>
	I0205 03:25:07.599436   70401 main.go:141] libmachine: (kindnet-253147) DBG |   <dns enable='no'/>
	I0205 03:25:07.599448   70401 main.go:141] libmachine: (kindnet-253147) DBG |   
	I0205 03:25:07.599461   70401 main.go:141] libmachine: (kindnet-253147) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0205 03:25:07.599470   70401 main.go:141] libmachine: (kindnet-253147) DBG |     <dhcp>
	I0205 03:25:07.599480   70401 main.go:141] libmachine: (kindnet-253147) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0205 03:25:07.599499   70401 main.go:141] libmachine: (kindnet-253147) DBG |     </dhcp>
	I0205 03:25:07.599517   70401 main.go:141] libmachine: (kindnet-253147) DBG |   </ip>
	I0205 03:25:07.599528   70401 main.go:141] libmachine: (kindnet-253147) DBG |   
	I0205 03:25:07.599541   70401 main.go:141] libmachine: (kindnet-253147) DBG | </network>
	I0205 03:25:07.599577   70401 main.go:141] libmachine: (kindnet-253147) DBG | 
	I0205 03:25:07.605138   70401 main.go:141] libmachine: (kindnet-253147) DBG | trying to create private KVM network mk-kindnet-253147 192.168.50.0/24...
	I0205 03:25:07.681432   70401 main.go:141] libmachine: (kindnet-253147) DBG | private KVM network mk-kindnet-253147 192.168.50.0/24 created
	I0205 03:25:07.681529   70401 main.go:141] libmachine: (kindnet-253147) setting up store path in /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147 ...
	I0205 03:25:07.681575   70401 main.go:141] libmachine: (kindnet-253147) building disk image from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0205 03:25:07.681592   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:07.681540   70424 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:25:07.681656   70401 main.go:141] libmachine: (kindnet-253147) Downloading /home/jenkins/minikube-integration/20363-12788/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0205 03:25:07.951475   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:07.951338   70424 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/id_rsa...
	I0205 03:25:07.990397   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:07.990227   70424 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/kindnet-253147.rawdisk...
	I0205 03:25:07.990437   70401 main.go:141] libmachine: (kindnet-253147) DBG | Writing magic tar header
	I0205 03:25:07.990452   70401 main.go:141] libmachine: (kindnet-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147 (perms=drwx------)
	I0205 03:25:07.990470   70401 main.go:141] libmachine: (kindnet-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines (perms=drwxr-xr-x)
	I0205 03:25:07.990485   70401 main.go:141] libmachine: (kindnet-253147) DBG | Writing SSH key tar header
	I0205 03:25:07.990503   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:07.990337   70424 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147 ...
	I0205 03:25:07.990513   70401 main.go:141] libmachine: (kindnet-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube (perms=drwxr-xr-x)
	I0205 03:25:07.990527   70401 main.go:141] libmachine: (kindnet-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147
	I0205 03:25:07.990538   70401 main.go:141] libmachine: (kindnet-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788 (perms=drwxrwxr-x)
	I0205 03:25:07.990565   70401 main.go:141] libmachine: (kindnet-253147) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0205 03:25:07.990579   70401 main.go:141] libmachine: (kindnet-253147) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0205 03:25:07.990592   70401 main.go:141] libmachine: (kindnet-253147) creating domain...
	I0205 03:25:07.990611   70401 main.go:141] libmachine: (kindnet-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines
	I0205 03:25:07.990633   70401 main.go:141] libmachine: (kindnet-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:25:07.990645   70401 main.go:141] libmachine: (kindnet-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788
	I0205 03:25:07.990656   70401 main.go:141] libmachine: (kindnet-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0205 03:25:07.990666   70401 main.go:141] libmachine: (kindnet-253147) DBG | checking permissions on dir: /home/jenkins
	I0205 03:25:07.990674   70401 main.go:141] libmachine: (kindnet-253147) DBG | checking permissions on dir: /home
	I0205 03:25:07.990684   70401 main.go:141] libmachine: (kindnet-253147) DBG | skipping /home - not owner
	I0205 03:25:07.991691   70401 main.go:141] libmachine: (kindnet-253147) define libvirt domain using xml: 
	I0205 03:25:07.991718   70401 main.go:141] libmachine: (kindnet-253147) <domain type='kvm'>
	I0205 03:25:07.991741   70401 main.go:141] libmachine: (kindnet-253147)   <name>kindnet-253147</name>
	I0205 03:25:07.991751   70401 main.go:141] libmachine: (kindnet-253147)   <memory unit='MiB'>3072</memory>
	I0205 03:25:07.991758   70401 main.go:141] libmachine: (kindnet-253147)   <vcpu>2</vcpu>
	I0205 03:25:07.991772   70401 main.go:141] libmachine: (kindnet-253147)   <features>
	I0205 03:25:07.991815   70401 main.go:141] libmachine: (kindnet-253147)     <acpi/>
	I0205 03:25:07.991858   70401 main.go:141] libmachine: (kindnet-253147)     <apic/>
	I0205 03:25:07.991868   70401 main.go:141] libmachine: (kindnet-253147)     <pae/>
	I0205 03:25:07.991873   70401 main.go:141] libmachine: (kindnet-253147)     
	I0205 03:25:07.991880   70401 main.go:141] libmachine: (kindnet-253147)   </features>
	I0205 03:25:07.991887   70401 main.go:141] libmachine: (kindnet-253147)   <cpu mode='host-passthrough'>
	I0205 03:25:07.991894   70401 main.go:141] libmachine: (kindnet-253147)   
	I0205 03:25:07.991900   70401 main.go:141] libmachine: (kindnet-253147)   </cpu>
	I0205 03:25:07.991910   70401 main.go:141] libmachine: (kindnet-253147)   <os>
	I0205 03:25:07.991919   70401 main.go:141] libmachine: (kindnet-253147)     <type>hvm</type>
	I0205 03:25:07.991927   70401 main.go:141] libmachine: (kindnet-253147)     <boot dev='cdrom'/>
	I0205 03:25:07.991942   70401 main.go:141] libmachine: (kindnet-253147)     <boot dev='hd'/>
	I0205 03:25:07.991954   70401 main.go:141] libmachine: (kindnet-253147)     <bootmenu enable='no'/>
	I0205 03:25:07.991967   70401 main.go:141] libmachine: (kindnet-253147)   </os>
	I0205 03:25:07.991978   70401 main.go:141] libmachine: (kindnet-253147)   <devices>
	I0205 03:25:07.991988   70401 main.go:141] libmachine: (kindnet-253147)     <disk type='file' device='cdrom'>
	I0205 03:25:07.992000   70401 main.go:141] libmachine: (kindnet-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/boot2docker.iso'/>
	I0205 03:25:07.992012   70401 main.go:141] libmachine: (kindnet-253147)       <target dev='hdc' bus='scsi'/>
	I0205 03:25:07.992021   70401 main.go:141] libmachine: (kindnet-253147)       <readonly/>
	I0205 03:25:07.992027   70401 main.go:141] libmachine: (kindnet-253147)     </disk>
	I0205 03:25:07.992037   70401 main.go:141] libmachine: (kindnet-253147)     <disk type='file' device='disk'>
	I0205 03:25:07.992047   70401 main.go:141] libmachine: (kindnet-253147)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0205 03:25:07.992068   70401 main.go:141] libmachine: (kindnet-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/kindnet-253147.rawdisk'/>
	I0205 03:25:07.992082   70401 main.go:141] libmachine: (kindnet-253147)       <target dev='hda' bus='virtio'/>
	I0205 03:25:07.992093   70401 main.go:141] libmachine: (kindnet-253147)     </disk>
	I0205 03:25:07.992098   70401 main.go:141] libmachine: (kindnet-253147)     <interface type='network'>
	I0205 03:25:07.992114   70401 main.go:141] libmachine: (kindnet-253147)       <source network='mk-kindnet-253147'/>
	I0205 03:25:07.992128   70401 main.go:141] libmachine: (kindnet-253147)       <model type='virtio'/>
	I0205 03:25:07.992139   70401 main.go:141] libmachine: (kindnet-253147)     </interface>
	I0205 03:25:07.992157   70401 main.go:141] libmachine: (kindnet-253147)     <interface type='network'>
	I0205 03:25:07.992164   70401 main.go:141] libmachine: (kindnet-253147)       <source network='default'/>
	I0205 03:25:07.992171   70401 main.go:141] libmachine: (kindnet-253147)       <model type='virtio'/>
	I0205 03:25:07.992178   70401 main.go:141] libmachine: (kindnet-253147)     </interface>
	I0205 03:25:07.992188   70401 main.go:141] libmachine: (kindnet-253147)     <serial type='pty'>
	I0205 03:25:07.992203   70401 main.go:141] libmachine: (kindnet-253147)       <target port='0'/>
	I0205 03:25:07.992213   70401 main.go:141] libmachine: (kindnet-253147)     </serial>
	I0205 03:25:07.992222   70401 main.go:141] libmachine: (kindnet-253147)     <console type='pty'>
	I0205 03:25:07.992229   70401 main.go:141] libmachine: (kindnet-253147)       <target type='serial' port='0'/>
	I0205 03:25:07.992238   70401 main.go:141] libmachine: (kindnet-253147)     </console>
	I0205 03:25:07.992247   70401 main.go:141] libmachine: (kindnet-253147)     <rng model='virtio'>
	I0205 03:25:07.992260   70401 main.go:141] libmachine: (kindnet-253147)       <backend model='random'>/dev/random</backend>
	I0205 03:25:07.992271   70401 main.go:141] libmachine: (kindnet-253147)     </rng>
	I0205 03:25:07.992284   70401 main.go:141] libmachine: (kindnet-253147)     
	I0205 03:25:07.992295   70401 main.go:141] libmachine: (kindnet-253147)     
	I0205 03:25:07.992303   70401 main.go:141] libmachine: (kindnet-253147)   </devices>
	I0205 03:25:07.992319   70401 main.go:141] libmachine: (kindnet-253147) </domain>
	I0205 03:25:07.992333   70401 main.go:141] libmachine: (kindnet-253147) 
	I0205 03:25:07.996091   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:c7:7d:c4 in network default
	I0205 03:25:07.996687   70401 main.go:141] libmachine: (kindnet-253147) starting domain...
	I0205 03:25:07.996710   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:07.996718   70401 main.go:141] libmachine: (kindnet-253147) ensuring networks are active...
	I0205 03:25:07.997619   70401 main.go:141] libmachine: (kindnet-253147) Ensuring network default is active
	I0205 03:25:07.997952   70401 main.go:141] libmachine: (kindnet-253147) Ensuring network mk-kindnet-253147 is active
	I0205 03:25:07.998436   70401 main.go:141] libmachine: (kindnet-253147) getting domain XML...
	I0205 03:25:07.999283   70401 main.go:141] libmachine: (kindnet-253147) creating domain...
	I0205 03:25:09.276733   70401 main.go:141] libmachine: (kindnet-253147) waiting for IP...
	I0205 03:25:09.277801   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:09.278237   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:09.278302   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:09.278238   70424 retry.go:31] will retry after 245.801996ms: waiting for domain to come up
	I0205 03:25:09.533992   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:09.540954   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:09.540978   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:09.540864   70424 retry.go:31] will retry after 307.327682ms: waiting for domain to come up
	I0205 03:25:09.849503   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:09.850055   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:09.850082   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:09.850038   70424 retry.go:31] will retry after 422.886883ms: waiting for domain to come up
	I0205 03:25:10.274778   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:10.275346   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:10.275408   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:10.275332   70424 retry.go:31] will retry after 431.728102ms: waiting for domain to come up
	I0205 03:25:10.708500   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:10.708982   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:10.709013   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:10.708957   70424 retry.go:31] will retry after 551.977154ms: waiting for domain to come up
	I0205 03:25:11.262721   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:11.263186   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:11.263219   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:11.263148   70424 retry.go:31] will retry after 818.53947ms: waiting for domain to come up
	I0205 03:25:12.083420   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:12.083928   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:12.083949   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:12.083896   70424 retry.go:31] will retry after 817.186164ms: waiting for domain to come up
	I0205 03:25:11.732670   68832 addons.go:514] duration metric: took 2.247220465s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0205 03:25:11.801912   68832 node_ready.go:53] node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:12.902447   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:12.902956   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:12.902985   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:12.902926   70424 retry.go:31] will retry after 1.394565698s: waiting for domain to come up
	I0205 03:25:14.299660   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:14.300193   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:14.300274   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:14.300187   70424 retry.go:31] will retry after 1.212173366s: waiting for domain to come up
	I0205 03:25:15.514475   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:15.514886   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:15.514920   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:15.514872   70424 retry.go:31] will retry after 2.327310444s: waiting for domain to come up
	I0205 03:25:14.302597   68832 node_ready.go:53] node "default-k8s-diff-port-568677" has status "Ready":"False"
	I0205 03:25:16.801138   68832 node_ready.go:49] node "default-k8s-diff-port-568677" has status "Ready":"True"
	I0205 03:25:16.801176   68832 node_ready.go:38] duration metric: took 7.003715945s for node "default-k8s-diff-port-568677" to be "Ready" ...
	I0205 03:25:16.801191   68832 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:25:16.805515   68832 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-k62ng" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:16.811351   68832 pod_ready.go:93] pod "coredns-668d6bf9bc-k62ng" in "kube-system" namespace has status "Ready":"True"
	I0205 03:25:16.811375   68832 pod_ready.go:82] duration metric: took 5.833268ms for pod "coredns-668d6bf9bc-k62ng" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:16.811387   68832 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:16.816074   68832 pod_ready.go:93] pod "etcd-default-k8s-diff-port-568677" in "kube-system" namespace has status "Ready":"True"
	I0205 03:25:16.816086   68832 pod_ready.go:82] duration metric: took 4.684167ms for pod "etcd-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:16.816094   68832 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:15.186032   64850 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0205 03:25:15.186110   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:25:15.186434   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:25:17.843679   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:17.844330   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:17.844365   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:17.844295   70424 retry.go:31] will retry after 2.368963296s: waiting for domain to come up
	I0205 03:25:20.214872   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:20.215538   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:20.215576   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:20.215490   70424 retry.go:31] will retry after 3.165724706s: waiting for domain to come up
	I0205 03:25:18.822359   68832 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-568677" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:21.321199   68832 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-568677" in "kube-system" namespace has status "Ready":"True"
	I0205 03:25:21.321224   68832 pod_ready.go:82] duration metric: took 4.505122565s for pod "kube-apiserver-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:21.321240   68832 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:21.325263   68832 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-568677" in "kube-system" namespace has status "Ready":"True"
	I0205 03:25:21.325286   68832 pod_ready.go:82] duration metric: took 4.037074ms for pod "kube-controller-manager-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:21.325298   68832 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2fbxx" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:21.328967   68832 pod_ready.go:93] pod "kube-proxy-2fbxx" in "kube-system" namespace has status "Ready":"True"
	I0205 03:25:21.328984   68832 pod_ready.go:82] duration metric: took 3.678446ms for pod "kube-proxy-2fbxx" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:21.328992   68832 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:21.332077   68832 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-568677" in "kube-system" namespace has status "Ready":"True"
	I0205 03:25:21.332092   68832 pod_ready.go:82] duration metric: took 3.094831ms for pod "kube-scheduler-default-k8s-diff-port-568677" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:21.332100   68832 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace to be "Ready" ...
	I0205 03:25:23.337273   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:20.187049   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:25:20.187366   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:25:23.382868   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:23.383477   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find current IP address of domain kindnet-253147 in network mk-kindnet-253147
	I0205 03:25:23.383504   70401 main.go:141] libmachine: (kindnet-253147) DBG | I0205 03:25:23.383444   70424 retry.go:31] will retry after 3.884227959s: waiting for domain to come up
	I0205 03:25:27.272703   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.273314   70401 main.go:141] libmachine: (kindnet-253147) found domain IP: 192.168.50.77
	I0205 03:25:27.273356   70401 main.go:141] libmachine: (kindnet-253147) reserving static IP address...
	I0205 03:25:27.273379   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has current primary IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.273787   70401 main.go:141] libmachine: (kindnet-253147) DBG | unable to find host DHCP lease matching {name: "kindnet-253147", mac: "52:54:00:d4:5d:9a", ip: "192.168.50.77"} in network mk-kindnet-253147
	I0205 03:25:27.361958   70401 main.go:141] libmachine: (kindnet-253147) DBG | Getting to WaitForSSH function...
	I0205 03:25:27.362011   70401 main.go:141] libmachine: (kindnet-253147) reserved static IP address 192.168.50.77 for domain kindnet-253147
	I0205 03:25:27.362025   70401 main.go:141] libmachine: (kindnet-253147) waiting for SSH...
	I0205 03:25:27.365797   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.366361   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:27.366395   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.366646   70401 main.go:141] libmachine: (kindnet-253147) DBG | Using SSH client type: external
	I0205 03:25:27.366670   70401 main.go:141] libmachine: (kindnet-253147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/id_rsa (-rw-------)
	I0205 03:25:27.366719   70401 main.go:141] libmachine: (kindnet-253147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.77 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 03:25:27.366739   70401 main.go:141] libmachine: (kindnet-253147) DBG | About to run SSH command:
	I0205 03:25:27.366764   70401 main.go:141] libmachine: (kindnet-253147) DBG | exit 0
	I0205 03:25:25.338048   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:27.338223   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:27.498720   70401 main.go:141] libmachine: (kindnet-253147) DBG | SSH cmd err, output: <nil>: 
	I0205 03:25:27.499006   70401 main.go:141] libmachine: (kindnet-253147) KVM machine creation complete
	I0205 03:25:27.499414   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetConfigRaw
	I0205 03:25:27.542688   70401 main.go:141] libmachine: (kindnet-253147) Calling .DriverName
	I0205 03:25:27.543015   70401 main.go:141] libmachine: (kindnet-253147) Calling .DriverName
	I0205 03:25:27.543179   70401 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0205 03:25:27.543204   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetState
	I0205 03:25:27.544835   70401 main.go:141] libmachine: Detecting operating system of created instance...
	I0205 03:25:27.544853   70401 main.go:141] libmachine: Waiting for SSH to be available...
	I0205 03:25:27.544858   70401 main.go:141] libmachine: Getting to WaitForSSH function...
	I0205 03:25:27.544864   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:27.547603   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.559415   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:27.559451   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.559658   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:27.560569   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:27.560765   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:27.560904   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:27.561058   70401 main.go:141] libmachine: Using SSH client type: native
	I0205 03:25:27.561258   70401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0205 03:25:27.561270   70401 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0205 03:25:27.673117   70401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:25:27.673147   70401 main.go:141] libmachine: Detecting the provisioner...
	I0205 03:25:27.673158   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:27.676389   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.676813   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:27.676843   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.677117   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:27.677294   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:27.677434   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:27.677590   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:27.677755   70401 main.go:141] libmachine: Using SSH client type: native
	I0205 03:25:27.677951   70401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0205 03:25:27.677964   70401 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0205 03:25:27.798035   70401 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0205 03:25:27.798122   70401 main.go:141] libmachine: found compatible host: buildroot
	I0205 03:25:27.798136   70401 main.go:141] libmachine: Provisioning with buildroot...
	I0205 03:25:27.798147   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetMachineName
	I0205 03:25:27.798443   70401 buildroot.go:166] provisioning hostname "kindnet-253147"
	I0205 03:25:27.798468   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetMachineName
	I0205 03:25:27.798611   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:27.801465   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.801856   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:27.801886   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.802051   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:27.802236   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:27.802420   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:27.802615   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:27.802807   70401 main.go:141] libmachine: Using SSH client type: native
	I0205 03:25:27.803020   70401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0205 03:25:27.803038   70401 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-253147 && echo "kindnet-253147" | sudo tee /etc/hostname
	I0205 03:25:27.931876   70401 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-253147
	
	I0205 03:25:27.931914   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:27.935391   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.935866   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:27.935895   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:27.936095   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:27.936296   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:27.936504   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:27.936679   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:27.936839   70401 main.go:141] libmachine: Using SSH client type: native
	I0205 03:25:27.937080   70401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0205 03:25:27.937111   70401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-253147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-253147/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-253147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:25:28.058763   70401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:25:28.058789   70401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:25:28.058805   70401 buildroot.go:174] setting up certificates
	I0205 03:25:28.058827   70401 provision.go:84] configureAuth start
	I0205 03:25:28.058839   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetMachineName
	I0205 03:25:28.059175   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetIP
	I0205 03:25:28.062347   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.062823   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:28.062853   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.063035   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:28.065393   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.065740   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:28.065776   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.065924   70401 provision.go:143] copyHostCerts
	I0205 03:25:28.065987   70401 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:25:28.065997   70401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:25:28.066060   70401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:25:28.066157   70401 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:25:28.066166   70401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:25:28.066198   70401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:25:28.066266   70401 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:25:28.066275   70401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:25:28.066296   70401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:25:28.066353   70401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.kindnet-253147 san=[127.0.0.1 192.168.50.77 kindnet-253147 localhost minikube]
	I0205 03:25:28.336255   70401 provision.go:177] copyRemoteCerts
	I0205 03:25:28.336325   70401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:25:28.336372   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:28.339453   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.339840   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:28.339870   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.340081   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:28.340252   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:28.340422   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:28.340551   70401 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/id_rsa Username:docker}
	I0205 03:25:28.429419   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:25:28.455189   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0205 03:25:28.477621   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0205 03:25:28.502165   70401 provision.go:87] duration metric: took 443.323518ms to configureAuth
	I0205 03:25:28.502194   70401 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:25:28.502395   70401 config.go:182] Loaded profile config "kindnet-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:25:28.502482   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:28.504843   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.505310   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:28.505383   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.505498   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:28.505694   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:28.505834   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:28.505989   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:28.506169   70401 main.go:141] libmachine: Using SSH client type: native
	I0205 03:25:28.506354   70401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0205 03:25:28.506376   70401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:25:28.751766   70401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 03:25:28.751797   70401 main.go:141] libmachine: Checking connection to Docker...
	I0205 03:25:28.751808   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetURL
	I0205 03:25:28.753235   70401 main.go:141] libmachine: (kindnet-253147) DBG | using libvirt version 6000000
	I0205 03:25:28.755763   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.756142   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:28.756168   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.756364   70401 main.go:141] libmachine: Docker is up and running!
	I0205 03:25:28.756381   70401 main.go:141] libmachine: Reticulating splines...
	I0205 03:25:28.756490   70401 client.go:171] duration metric: took 21.162175427s to LocalClient.Create
	I0205 03:25:28.756533   70401 start.go:167] duration metric: took 21.162263772s to libmachine.API.Create "kindnet-253147"
	I0205 03:25:28.756548   70401 start.go:293] postStartSetup for "kindnet-253147" (driver="kvm2")
	I0205 03:25:28.756565   70401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:25:28.756597   70401 main.go:141] libmachine: (kindnet-253147) Calling .DriverName
	I0205 03:25:28.756846   70401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:25:28.756871   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:28.758972   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.759291   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:28.759318   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.759520   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:28.759706   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:28.759853   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:28.760008   70401 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/id_rsa Username:docker}
	I0205 03:25:28.844100   70401 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:25:28.848563   70401 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:25:28.848593   70401 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:25:28.848674   70401 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:25:28.848780   70401 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:25:28.848917   70401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:25:28.858633   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:25:28.887322   70401 start.go:296] duration metric: took 130.756362ms for postStartSetup
	I0205 03:25:28.887409   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetConfigRaw
	I0205 03:25:28.888009   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetIP
	I0205 03:25:28.890959   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.891248   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:28.891280   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.891545   70401 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/config.json ...
	I0205 03:25:28.891801   70401 start.go:128] duration metric: took 21.317250012s to createHost
	I0205 03:25:28.891831   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:28.894203   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.894553   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:28.894581   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:28.894697   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:28.894839   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:28.895041   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:28.895219   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:28.895385   70401 main.go:141] libmachine: Using SSH client type: native
	I0205 03:25:28.895539   70401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.77 22 <nil> <nil>}
	I0205 03:25:28.895553   70401 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:25:29.015034   70401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738725928.973615554
	
	I0205 03:25:29.015059   70401 fix.go:216] guest clock: 1738725928.973615554
	I0205 03:25:29.015069   70401 fix.go:229] Guest: 2025-02-05 03:25:28.973615554 +0000 UTC Remote: 2025-02-05 03:25:28.891817863 +0000 UTC m=+21.438424417 (delta=81.797691ms)
	I0205 03:25:29.015104   70401 fix.go:200] guest clock delta is within tolerance: 81.797691ms
	I0205 03:25:29.015111   70401 start.go:83] releasing machines lock for "kindnet-253147", held for 21.440664338s
	I0205 03:25:29.015137   70401 main.go:141] libmachine: (kindnet-253147) Calling .DriverName
	I0205 03:25:29.015380   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetIP
	I0205 03:25:29.017852   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:29.018213   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:29.018244   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:29.018412   70401 main.go:141] libmachine: (kindnet-253147) Calling .DriverName
	I0205 03:25:29.018951   70401 main.go:141] libmachine: (kindnet-253147) Calling .DriverName
	I0205 03:25:29.019148   70401 main.go:141] libmachine: (kindnet-253147) Calling .DriverName
	I0205 03:25:29.019236   70401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:25:29.019281   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:29.019339   70401 ssh_runner.go:195] Run: cat /version.json
	I0205 03:25:29.019359   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:29.021905   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:29.022201   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:29.022267   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:29.022291   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:29.022431   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:29.022567   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:29.022713   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:29.022750   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:29.022841   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:29.022863   70401 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/id_rsa Username:docker}
	I0205 03:25:29.022923   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:29.023072   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:29.023208   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:29.023371   70401 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/id_rsa Username:docker}
	I0205 03:25:29.102691   70401 ssh_runner.go:195] Run: systemctl --version
	I0205 03:25:29.127205   70401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:25:29.287827   70401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:25:29.293930   70401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:25:29.294010   70401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:25:29.311235   70401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
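	[editor's note] The find/mv step above sidelines any bridge or podman CNI configs by appending .mk_disabled so they cannot conflict with the selected CNI. A rough Go equivalent of that rename-with-suffix pass (the directory and suffix are taken from the log; the helper name is mine):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableConflictingCNIConfigs renames bridge/podman CNI config files in dir
	// by appending suffix, roughly what the `find ... -exec mv` above does.
	func disableConflictingCNIConfigs(dir, suffix string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, suffix) {
				continue
			}
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+suffix); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableConflictingCNIConfigs("/etc/cni/net.d", ".mk_disabled")
		if err != nil {
			fmt.Fprintln(os.Stderr, "warning:", err)
		}
		fmt.Println("disabled configs:", disabled)
	}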
	I0205 03:25:29.311266   70401 start.go:495] detecting cgroup driver to use...
	I0205 03:25:29.311335   70401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:25:29.328350   70401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:25:29.343546   70401 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:25:29.343613   70401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:25:29.361229   70401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:25:29.375835   70401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:25:29.498290   70401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:25:29.657887   70401 docker.go:233] disabling docker service ...
	I0205 03:25:29.657970   70401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:25:29.677253   70401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:25:29.693102   70401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:25:29.848847   70401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:25:29.993880   70401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:25:30.007932   70401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:25:30.026854   70401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 03:25:30.026919   70401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:25:30.037598   70401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:25:30.037659   70401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:25:30.048321   70401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:25:30.059138   70401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:25:30.071407   70401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:25:30.083537   70401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:25:30.095551   70401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:25:30.112763   70401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:25:30.123195   70401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:25:30.133167   70401 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0205 03:25:30.133249   70401 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0205 03:25:30.146819   70401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
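	[editor's note] The sequence above probes net.bridge.bridge-nf-call-iptables, loads br_netfilter when the sysctl file is missing, and then turns on IPv4 forwarding. The same probe-then-load logic, sketched in Go (needs root; error handling simplified):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Absence of this sysctl usually means br_netfilter is not loaded yet,
		// which is exactly the status-255 case in the log above.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter:", err)
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "modprobe failed: %v\n%s", err, out)
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
			fmt.Fprintln(os.Stderr, "enabling ip_forward:", err)
		}
	}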
	I0205 03:25:30.156801   70401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:25:30.295795   70401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:25:30.395836   70401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:25:30.395893   70401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:25:30.400701   70401 start.go:563] Will wait 60s for crictl version
	I0205 03:25:30.400774   70401 ssh_runner.go:195] Run: which crictl
	I0205 03:25:30.404467   70401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:25:30.445528   70401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:25:30.445615   70401 ssh_runner.go:195] Run: crio --version
	I0205 03:25:30.472931   70401 ssh_runner.go:195] Run: crio --version
	I0205 03:25:30.503935   70401 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0205 03:25:30.505103   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetIP
	I0205 03:25:30.508448   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:30.508844   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:30.508878   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:30.509109   70401 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0205 03:25:30.513711   70401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
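	[editor's note] The one-liner above updates /etc/hosts idempotently: drop any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back into place. The same idea in Go (IP and hostname taken from the log; writing the real /etc/hosts requires root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry removes any line ending in "\t"+host and appends ip+"\t"+host,
	// mirroring the grep -v / echo / cp pipeline shown above.
	func upsertHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop any previous mapping for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}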
	I0205 03:25:30.526301   70401 kubeadm.go:883] updating cluster {Name:kindnet-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kindnet-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:25:30.526405   70401 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:25:30.526456   70401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:25:30.557581   70401 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0205 03:25:30.557654   70401 ssh_runner.go:195] Run: which lz4
	I0205 03:25:30.561395   70401 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0205 03:25:30.565460   70401 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0205 03:25:30.565500   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0205 03:25:31.877769   70401 crio.go:462] duration metric: took 1.31644261s to copy over tarball
	I0205 03:25:31.877842   70401 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0205 03:25:29.338652   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:31.339300   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:30.187609   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:25:30.187908   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:25:34.115681   70401 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.23780878s)
	I0205 03:25:34.115721   70401 crio.go:469] duration metric: took 2.237923025s to extract the tarball
	I0205 03:25:34.115731   70401 ssh_runner.go:146] rm: /preloaded.tar.lz4
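	[editor's note] The preload flow above is: stat /preloaded.tar.lz4 on the guest, copy the cached tarball over when it is missing, extract it under /var with lz4 decompression, then delete the tarball. A local sketch of the check-and-extract steps (the copy is done over SSH in minikube; here I only shell out to tar, assuming lz4 is installed):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			// In minikube this is the point where the cached tarball is scp'd over.
			fmt.Fprintln(os.Stderr, "preload tarball missing, would copy it over:", err)
			return
		}
		// Same flags as the log: preserve xattrs (image layers rely on
		// security.capability) and decompress with lz4 while extracting under /var.
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "extract failed: %v\n%s", err, out)
			return
		}
		_ = os.Remove(tarball) // the log removes the tarball after extraction
	}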
	I0205 03:25:34.152531   70401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:25:34.193456   70401 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:25:34.193481   70401 cache_images.go:84] Images are preloaded, skipping loading
	I0205 03:25:34.193488   70401 kubeadm.go:934] updating node { 192.168.50.77 8443 v1.32.1 crio true true} ...
	I0205 03:25:34.193578   70401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-253147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:kindnet-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0205 03:25:34.193638   70401 ssh_runner.go:195] Run: crio config
	I0205 03:25:34.243315   70401 cni.go:84] Creating CNI manager for "kindnet"
	I0205 03:25:34.243343   70401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:25:34.243367   70401 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.77 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-253147 NodeName:kindnet-253147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 03:25:34.243595   70401 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-253147"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.77"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.77"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 03:25:34.243666   70401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 03:25:34.253539   70401 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:25:34.253609   70401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:25:34.263768   70401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0205 03:25:34.280297   70401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:25:34.297169   70401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
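	[editor's note] The kubeadm config generated above and copied to /var/tmp/minikube/kubeadm.yaml.new is a multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file. A quick way to sanity-check such a file is to decode each document and print its kind, as in this sketch (requires the gopkg.in/yaml.v3 module; the path is the one shown in the log):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				fmt.Fprintln(os.Stderr, "decode:", err)
				return
			}
			// Expect InitConfiguration, ClusterConfiguration,
			// KubeletConfiguration and KubeProxyConfiguration, as above.
			fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
		}
	}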
	I0205 03:25:34.314197   70401 ssh_runner.go:195] Run: grep 192.168.50.77	control-plane.minikube.internal$ /etc/hosts
	I0205 03:25:34.318044   70401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:25:34.329858   70401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:25:34.445309   70401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:25:34.462284   70401 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147 for IP: 192.168.50.77
	I0205 03:25:34.462310   70401 certs.go:194] generating shared ca certs ...
	I0205 03:25:34.462328   70401 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:25:34.462488   70401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:25:34.462543   70401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:25:34.462558   70401 certs.go:256] generating profile certs ...
	I0205 03:25:34.462619   70401 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.key
	I0205 03:25:34.462645   70401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt with IP's: []
	I0205 03:25:34.637975   70401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt ...
	I0205 03:25:34.638006   70401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: {Name:mk338ad8beb2f860d7a642f64bbb56a572f0beb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:25:34.638217   70401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.key ...
	I0205 03:25:34.638238   70401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.key: {Name:mk626aa368d9c9f33293b2e06eb9c9e02e010dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:25:34.638368   70401 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.key.1da9eda6
	I0205 03:25:34.638389   70401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.crt.1da9eda6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.77]
	I0205 03:25:34.891918   70401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.crt.1da9eda6 ...
	I0205 03:25:34.891951   70401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.crt.1da9eda6: {Name:mkfad7fd57fa80e4cf99b75dc5e2b1800b9cbc52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:25:34.892146   70401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.key.1da9eda6 ...
	I0205 03:25:34.892165   70401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.key.1da9eda6: {Name:mk8550da5fdcc9cac2b0e8c9ab477b6cf9eb79b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:25:34.892291   70401 certs.go:381] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.crt.1da9eda6 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.crt
	I0205 03:25:34.892406   70401 certs.go:385] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.key.1da9eda6 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.key
	I0205 03:25:34.892495   70401 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/proxy-client.key
	I0205 03:25:34.892518   70401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/proxy-client.crt with IP's: []
	I0205 03:25:35.153934   70401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/proxy-client.crt ...
	I0205 03:25:35.153961   70401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/proxy-client.crt: {Name:mkac663949ac91ab832adeaae41ffc956507a410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:25:35.154120   70401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/proxy-client.key ...
	I0205 03:25:35.154133   70401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/proxy-client.key: {Name:mk5e72e3a8419babf6400d4dad5669db1c42889a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
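	[editor's note] The certs.go/crypto.go steps above issue a profile-specific apiserver certificate whose IP SANs cover the cluster service IP (10.96.0.1), localhost, 10.0.0.1 and the node IP (192.168.50.77), signed by the shared minikubeCA. A self-contained sketch of producing a certificate with those IP SANs; for brevity it is self-signed here, whereas minikube signs with its CA:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The same IP SANs that appear in the log above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.50.77"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}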
	I0205 03:25:35.154305   70401 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:25:35.154340   70401 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:25:35.154350   70401 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:25:35.154371   70401 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:25:35.154393   70401 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:25:35.154414   70401 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:25:35.154450   70401 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:25:35.154947   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:25:35.180760   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:25:35.207837   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:25:35.232958   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:25:35.254558   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0205 03:25:35.275550   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0205 03:25:35.296440   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:25:35.317562   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0205 03:25:35.338569   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:25:35.362735   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:25:35.387032   70401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:25:35.411605   70401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:25:35.427083   70401 ssh_runner.go:195] Run: openssl version
	I0205 03:25:35.433215   70401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19989.pem && ln -fs /usr/share/ca-certificates/19989.pem /etc/ssl/certs/19989.pem"
	I0205 03:25:35.443277   70401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19989.pem
	I0205 03:25:35.447760   70401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:11 /usr/share/ca-certificates/19989.pem
	I0205 03:25:35.447809   70401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19989.pem
	I0205 03:25:35.453540   70401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19989.pem /etc/ssl/certs/51391683.0"
	I0205 03:25:35.465191   70401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:25:35.476756   70401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:25:35.481042   70401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:25:35.481088   70401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:25:35.486568   70401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:25:35.497827   70401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:25:35.507820   70401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:25:35.511764   70401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:25:35.511816   70401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:25:35.517365   70401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
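	[editor's note] The block above installs each CA under /usr/share/ca-certificates and then creates the OpenSSL-style subject-hash symlink (e.g. b5213941.0) in /etc/ssl/certs so the system trust store picks it up. A sketch of deriving that link name by shelling out to openssl x509 -hash -noout (paths taken from the log; creating the symlink needs root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// subjectHash returns the OpenSSL subject hash used to name trust-store links.
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		hash, err := subjectHash(cert)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Same effect as the `ln -fs <cert> <hash>.0` step above.
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("linked", link, "->", cert)
	}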
	I0205 03:25:35.528801   70401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 03:25:35.532649   70401 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0205 03:25:35.532698   70401 kubeadm.go:392] StartCluster: {Name:kindnet-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kindnet-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:25:35.532775   70401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 03:25:35.532812   70401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:25:35.574320   70401 cri.go:89] found id: ""
	I0205 03:25:35.574397   70401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 03:25:35.583914   70401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 03:25:35.593280   70401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:25:35.603329   70401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:25:35.603346   70401 kubeadm.go:157] found existing configuration files:
	
	I0205 03:25:35.603439   70401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:25:35.611697   70401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:25:35.611748   70401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:25:35.620618   70401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:25:35.628993   70401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:25:35.629036   70401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:25:35.637778   70401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:25:35.646137   70401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:25:35.646189   70401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:25:35.654869   70401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:25:35.663555   70401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:25:35.663613   70401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
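	[editor's note] The loop above checks each existing /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not mention it, so kubeadm regenerates them cleanly. A sketch of that grep-or-delete pass (file list and endpoint taken from the log; missing files are simply skipped, which is the first-start case here):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range confs {
			data, err := os.ReadFile(path)
			if err != nil {
				continue // nothing to clean up, as in the "No such file" output above
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Println("stale config, removing:", path)
				_ = os.Remove(path)
			}
		}
	}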
	I0205 03:25:35.672287   70401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:25:35.821132   70401 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:25:33.841597   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:36.337202   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:38.344386   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:40.839328   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:43.338525   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:45.076269   70401 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0205 03:25:45.076335   70401 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:25:45.076444   70401 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:25:45.076602   70401 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:25:45.076737   70401 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0205 03:25:45.076825   70401 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:25:45.078393   70401 out.go:235]   - Generating certificates and keys ...
	I0205 03:25:45.078477   70401 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:25:45.078581   70401 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:25:45.078680   70401 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 03:25:45.078761   70401 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 03:25:45.078843   70401 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 03:25:45.078917   70401 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 03:25:45.078987   70401 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 03:25:45.079221   70401 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-253147 localhost] and IPs [192.168.50.77 127.0.0.1 ::1]
	I0205 03:25:45.079294   70401 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 03:25:45.079421   70401 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-253147 localhost] and IPs [192.168.50.77 127.0.0.1 ::1]
	I0205 03:25:45.079495   70401 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 03:25:45.079582   70401 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 03:25:45.079646   70401 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 03:25:45.079724   70401 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:25:45.079791   70401 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:25:45.079886   70401 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0205 03:25:45.079945   70401 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:25:45.079995   70401 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:25:45.080039   70401 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:25:45.080106   70401 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:25:45.080204   70401 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:25:45.081735   70401 out.go:235]   - Booting up control plane ...
	I0205 03:25:45.081819   70401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:25:45.081902   70401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:25:45.081981   70401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:25:45.082127   70401 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:25:45.082259   70401 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:25:45.082317   70401 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:25:45.082470   70401 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0205 03:25:45.082555   70401 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0205 03:25:45.082616   70401 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 508.201694ms
	I0205 03:25:45.082702   70401 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0205 03:25:45.082800   70401 kubeadm.go:310] [api-check] The API server is healthy after 5.00216908s
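	[editor's note] The kubelet-check and api-check phases above amount to polling a health endpoint (the kubelet's http://127.0.0.1:10248/healthz) until it answers or a deadline passes. A minimal poller in that spirit; the URL and the 4m ceiling come from the log, while the retry interval is my choice:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns 200 OK or the timeout elapses.
	func waitHealthy(url string, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		client := &http.Client{Timeout: 2 * time.Second}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("%s not healthy after %v", url, timeout)
	}

	func main() {
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute, 500*time.Millisecond); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kubelet is healthy")
	}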
	I0205 03:25:45.082966   70401 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0205 03:25:45.083070   70401 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0205 03:25:45.083134   70401 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0205 03:25:45.083365   70401 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-253147 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0205 03:25:45.083468   70401 kubeadm.go:310] [bootstrap-token] Using token: 09j9zq.frcn32w820h6pikr
	I0205 03:25:45.084754   70401 out.go:235]   - Configuring RBAC rules ...
	I0205 03:25:45.084844   70401 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0205 03:25:45.084917   70401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0205 03:25:45.085046   70401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0205 03:25:45.085158   70401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0205 03:25:45.085274   70401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0205 03:25:45.085367   70401 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0205 03:25:45.085471   70401 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0205 03:25:45.085510   70401 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0205 03:25:45.085549   70401 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0205 03:25:45.085555   70401 kubeadm.go:310] 
	I0205 03:25:45.085612   70401 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0205 03:25:45.085621   70401 kubeadm.go:310] 
	I0205 03:25:45.085701   70401 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0205 03:25:45.085710   70401 kubeadm.go:310] 
	I0205 03:25:45.085745   70401 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0205 03:25:45.085814   70401 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0205 03:25:45.085887   70401 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0205 03:25:45.085897   70401 kubeadm.go:310] 
	I0205 03:25:45.085941   70401 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0205 03:25:45.085946   70401 kubeadm.go:310] 
	I0205 03:25:45.085983   70401 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0205 03:25:45.085989   70401 kubeadm.go:310] 
	I0205 03:25:45.086028   70401 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0205 03:25:45.086085   70401 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0205 03:25:45.086143   70401 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0205 03:25:45.086149   70401 kubeadm.go:310] 
	I0205 03:25:45.086219   70401 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0205 03:25:45.086287   70401 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0205 03:25:45.086296   70401 kubeadm.go:310] 
	I0205 03:25:45.086363   70401 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 09j9zq.frcn32w820h6pikr \
	I0205 03:25:45.086443   70401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 \
	I0205 03:25:45.086468   70401 kubeadm.go:310] 	--control-plane 
	I0205 03:25:45.086475   70401 kubeadm.go:310] 
	I0205 03:25:45.086547   70401 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0205 03:25:45.086553   70401 kubeadm.go:310] 
	I0205 03:25:45.086621   70401 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 09j9zq.frcn32w820h6pikr \
	I0205 03:25:45.086725   70401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 
	I0205 03:25:45.086737   70401 cni.go:84] Creating CNI manager for "kindnet"
	I0205 03:25:45.088944   70401 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0205 03:25:45.090123   70401 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0205 03:25:45.095411   70401 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0205 03:25:45.095429   70401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0205 03:25:45.114033   70401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0205 03:25:45.406861   70401 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 03:25:45.406963   70401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:25:45.407002   70401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-253147 minikube.k8s.io/updated_at=2025_02_05T03_25_45_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d minikube.k8s.io/name=kindnet-253147 minikube.k8s.io/primary=true
	I0205 03:25:45.439952   70401 ops.go:34] apiserver oom_adj: -16
	I0205 03:25:45.625574   70401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:25:46.125709   70401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:25:46.626600   70401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:25:47.126596   70401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:25:45.339019   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:47.839558   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:47.626546   70401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:25:48.126439   70401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:25:48.626261   70401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:25:48.727365   70401 kubeadm.go:1113] duration metric: took 3.320449214s to wait for elevateKubeSystemPrivileges
	I0205 03:25:48.727403   70401 kubeadm.go:394] duration metric: took 13.194708187s to StartCluster
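	[editor's note] The repeated `kubectl get sa default` runs above are minikube waiting for the default service account to exist (the elevateKubeSystemPrivileges step) before declaring StartCluster complete. A simple retry loop along those lines; the kubectl path and kubeconfig are the ones shown in the log, the overall deadline is my assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		const kubectl = "/var/lib/minikube/binaries/v1.32.1/kubectl"
		deadline := time.Now().Add(2 * time.Minute) // in the log it succeeds within a few seconds
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}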
	I0205 03:25:48.727421   70401 settings.go:142] acquiring lock: {Name:mk2eca847da5ba78f5b041a83e5cfcbdebb0c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:25:48.727503   70401 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:25:48.729326   70401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:25:48.729623   70401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0205 03:25:48.729632   70401 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.77 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:25:48.729718   70401 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0205 03:25:48.729824   70401 addons.go:69] Setting storage-provisioner=true in profile "kindnet-253147"
	I0205 03:25:48.729832   70401 config.go:182] Loaded profile config "kindnet-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:25:48.729844   70401 addons.go:238] Setting addon storage-provisioner=true in "kindnet-253147"
	I0205 03:25:48.729861   70401 addons.go:69] Setting default-storageclass=true in profile "kindnet-253147"
	I0205 03:25:48.729877   70401 host.go:66] Checking if "kindnet-253147" exists ...
	I0205 03:25:48.729885   70401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-253147"
	I0205 03:25:48.730324   70401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:48.730364   70401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:48.730402   70401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:48.730450   70401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:48.731267   70401 out.go:177] * Verifying Kubernetes components...
	I0205 03:25:48.732703   70401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:25:48.746360   70401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46499
	I0205 03:25:48.746839   70401 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:48.747398   70401 main.go:141] libmachine: Using API Version  1
	I0205 03:25:48.747426   70401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:48.747846   70401 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:48.748091   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetState
	I0205 03:25:48.750460   70401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0205 03:25:48.750823   70401 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:48.751293   70401 main.go:141] libmachine: Using API Version  1
	I0205 03:25:48.751310   70401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:48.751586   70401 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:48.751663   70401 addons.go:238] Setting addon default-storageclass=true in "kindnet-253147"
	I0205 03:25:48.751708   70401 host.go:66] Checking if "kindnet-253147" exists ...
	I0205 03:25:48.752022   70401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:48.752057   70401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:48.752100   70401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:48.752135   70401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:48.767261   70401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0205 03:25:48.767502   70401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0205 03:25:48.768284   70401 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:48.768327   70401 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:48.768851   70401 main.go:141] libmachine: Using API Version  1
	I0205 03:25:48.768885   70401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:48.768947   70401 main.go:141] libmachine: Using API Version  1
	I0205 03:25:48.768969   70401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:48.769276   70401 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:48.769322   70401 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:48.769513   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetState
	I0205 03:25:48.769852   70401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:25:48.769898   70401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:25:48.771447   70401 main.go:141] libmachine: (kindnet-253147) Calling .DriverName
	I0205 03:25:48.773381   70401 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:25:48.774598   70401 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:25:48.774613   70401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0205 03:25:48.774628   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:48.778558   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:48.779052   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:48.779078   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:48.779232   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:48.779414   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:48.779564   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:48.779697   70401 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/id_rsa Username:docker}
	I0205 03:25:48.788583   70401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0205 03:25:48.789136   70401 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:25:48.789686   70401 main.go:141] libmachine: Using API Version  1
	I0205 03:25:48.789707   70401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:25:48.790085   70401 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:25:48.790266   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetState
	I0205 03:25:48.792463   70401 main.go:141] libmachine: (kindnet-253147) Calling .DriverName
	I0205 03:25:48.792681   70401 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0205 03:25:48.792695   70401 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0205 03:25:48.792707   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHHostname
	I0205 03:25:48.795639   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:48.796253   70401 main.go:141] libmachine: (kindnet-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:5d:9a", ip: ""} in network mk-kindnet-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:25:22 +0000 UTC Type:0 Mac:52:54:00:d4:5d:9a Iaid: IPaddr:192.168.50.77 Prefix:24 Hostname:kindnet-253147 Clientid:01:52:54:00:d4:5d:9a}
	I0205 03:25:48.796283   70401 main.go:141] libmachine: (kindnet-253147) DBG | domain kindnet-253147 has defined IP address 192.168.50.77 and MAC address 52:54:00:d4:5d:9a in network mk-kindnet-253147
	I0205 03:25:48.796320   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHPort
	I0205 03:25:48.796492   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHKeyPath
	I0205 03:25:48.796671   70401 main.go:141] libmachine: (kindnet-253147) Calling .GetSSHUsername
	I0205 03:25:48.796792   70401 sshutil.go:53] new ssh client: &{IP:192.168.50.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/kindnet-253147/id_rsa Username:docker}
	I0205 03:25:48.908835   70401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0205 03:25:48.958649   70401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:25:49.212328   70401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:25:49.212472   70401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0205 03:25:49.388283   70401 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
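	# Sketch only: the CoreDNS rewrite logged above can be inspected afterwards via the
	# standard "Corefile" data key of the kube-system/coredns ConfigMap (key name assumed).
	kubectl --context kindnet-253147 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# The output should contain the injected hosts block mapping 192.168.50.1 to host.minikube.internal.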
	I0205 03:25:49.389672   70401 node_ready.go:35] waiting up to 15m0s for node "kindnet-253147" to be "Ready" ...
	I0205 03:25:49.858513   70401 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:49.858540   70401 main.go:141] libmachine: (kindnet-253147) Calling .Close
	I0205 03:25:49.858523   70401 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:49.858606   70401 main.go:141] libmachine: (kindnet-253147) Calling .Close
	I0205 03:25:49.858825   70401 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:49.858841   70401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:49.858849   70401 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:49.858856   70401 main.go:141] libmachine: (kindnet-253147) Calling .Close
	I0205 03:25:49.858897   70401 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:49.858920   70401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:49.858938   70401 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:49.858956   70401 main.go:141] libmachine: (kindnet-253147) Calling .Close
	I0205 03:25:49.859073   70401 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:49.859090   70401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:49.859272   70401 main.go:141] libmachine: (kindnet-253147) DBG | Closing plugin on server side
	I0205 03:25:49.859289   70401 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:49.859302   70401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:49.870728   70401 main.go:141] libmachine: Making call to close driver server
	I0205 03:25:49.870751   70401 main.go:141] libmachine: (kindnet-253147) Calling .Close
	I0205 03:25:49.870970   70401 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:25:49.871009   70401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:25:49.870985   70401 main.go:141] libmachine: (kindnet-253147) DBG | Closing plugin on server side
	I0205 03:25:49.872243   70401 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0205 03:25:49.873235   70401 addons.go:514] duration metric: took 1.143519254s for enable addons: enabled=[storage-provisioner default-storageclass]
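	# Sketch only: the two addons enabled above can be checked by hand through the kubectl
	# context this run just wrote (pod name taken from the kube-system listing further below).
	kubectl --context kindnet-253147 -n kube-system get pod storage-provisioner
	kubectl --context kindnet-253147 get storageclass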
	I0205 03:25:49.893021   70401 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-253147" context rescaled to 1 replicas
	I0205 03:25:51.393068   70401 node_ready.go:53] node "kindnet-253147" has status "Ready":"False"
	I0205 03:25:49.840293   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:52.338405   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:50.188302   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:25:50.188480   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:25:53.393627   70401 node_ready.go:53] node "kindnet-253147" has status "Ready":"False"
	I0205 03:25:55.445161   70401 node_ready.go:53] node "kindnet-253147" has status "Ready":"False"
	I0205 03:25:54.837419   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:56.846200   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:25:57.892864   70401 node_ready.go:53] node "kindnet-253147" has status "Ready":"False"
	I0205 03:26:00.392525   70401 node_ready.go:53] node "kindnet-253147" has status "Ready":"False"
	I0205 03:26:02.392748   70401 node_ready.go:53] node "kindnet-253147" has status "Ready":"False"
	I0205 03:25:59.337544   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:01.338221   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:04.394013   70401 node_ready.go:53] node "kindnet-253147" has status "Ready":"False"
	I0205 03:26:04.894915   70401 node_ready.go:49] node "kindnet-253147" has status "Ready":"True"
	I0205 03:26:04.894940   70401 node_ready.go:38] duration metric: took 15.505240224s for node "kindnet-253147" to be "Ready" ...
	I0205 03:26:04.894952   70401 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:26:04.901432   70401 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-4gstv" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:05.911633   70401 pod_ready.go:93] pod "coredns-668d6bf9bc-4gstv" in "kube-system" namespace has status "Ready":"True"
	I0205 03:26:05.911662   70401 pod_ready.go:82] duration metric: took 1.010206389s for pod "coredns-668d6bf9bc-4gstv" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:05.911675   70401 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:05.915358   70401 pod_ready.go:93] pod "etcd-kindnet-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:26:05.915386   70401 pod_ready.go:82] duration metric: took 3.703609ms for pod "etcd-kindnet-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:05.915403   70401 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:05.919213   70401 pod_ready.go:93] pod "kube-apiserver-kindnet-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:26:05.919238   70401 pod_ready.go:82] duration metric: took 3.826765ms for pod "kube-apiserver-kindnet-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:05.919252   70401 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:05.923299   70401 pod_ready.go:93] pod "kube-controller-manager-kindnet-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:26:05.923319   70401 pod_ready.go:82] duration metric: took 4.05884ms for pod "kube-controller-manager-kindnet-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:05.923331   70401 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-7grzr" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:06.094051   70401 pod_ready.go:93] pod "kube-proxy-7grzr" in "kube-system" namespace has status "Ready":"True"
	I0205 03:26:06.094074   70401 pod_ready.go:82] duration metric: took 170.736748ms for pod "kube-proxy-7grzr" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:06.094089   70401 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:06.494151   70401 pod_ready.go:93] pod "kube-scheduler-kindnet-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:26:06.494177   70401 pod_ready.go:82] duration metric: took 400.080326ms for pod "kube-scheduler-kindnet-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:26:06.494190   70401 pod_ready.go:39] duration metric: took 1.599221145s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
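	# Sketch only: the readiness polling above has a kubectl equivalent; the label selector
	# mirrors the k8s-app=kube-dns entry from the list logged at 03:26:04.
	kubectl --context kindnet-253147 wait --for=condition=Ready node/kindnet-253147 --timeout=15m
	kubectl --context kindnet-253147 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=15m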
	I0205 03:26:06.494209   70401 api_server.go:52] waiting for apiserver process to appear ...
	I0205 03:26:06.494264   70401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:26:06.511909   70401 api_server.go:72] duration metric: took 17.782245643s to wait for apiserver process to appear ...
	I0205 03:26:06.511938   70401 api_server.go:88] waiting for apiserver healthz status ...
	I0205 03:26:06.511958   70401 api_server.go:253] Checking apiserver healthz at https://192.168.50.77:8443/healthz ...
	I0205 03:26:06.516847   70401 api_server.go:279] https://192.168.50.77:8443/healthz returned 200:
	ok
	I0205 03:26:06.517771   70401 api_server.go:141] control plane version: v1.32.1
	I0205 03:26:06.517793   70401 api_server.go:131] duration metric: took 5.848423ms to wait for apiserver health ...
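	# Sketch only: the same health probe can be issued manually; assuming default anonymous
	# access to /healthz, either command should print "ok" for the endpoint checked above.
	kubectl --context kindnet-253147 get --raw /healthz
	curl -k https://192.168.50.77:8443/healthz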
	I0205 03:26:06.517801   70401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 03:26:06.695138   70401 system_pods.go:59] 8 kube-system pods found
	I0205 03:26:06.695186   70401 system_pods.go:61] "coredns-668d6bf9bc-4gstv" [c0fa1918-640c-4dd9-bd15-5195b9794b39] Running
	I0205 03:26:06.695197   70401 system_pods.go:61] "etcd-kindnet-253147" [4f3cb9c4-d820-40c1-b06e-1a34796c8b4b] Running
	I0205 03:26:06.695204   70401 system_pods.go:61] "kindnet-jlqlj" [3c9d6c91-ceff-47a7-90ff-5a97e567dea5] Running
	I0205 03:26:06.695210   70401 system_pods.go:61] "kube-apiserver-kindnet-253147" [59c47da9-4336-4699-b915-0cc264e081c0] Running
	I0205 03:26:06.695215   70401 system_pods.go:61] "kube-controller-manager-kindnet-253147" [1d9a5075-8c93-48ad-9574-d7c1462171d8] Running
	I0205 03:26:06.695223   70401 system_pods.go:61] "kube-proxy-7grzr" [ed76a230-3935-41e9-8a06-cb1283172c68] Running
	I0205 03:26:06.695228   70401 system_pods.go:61] "kube-scheduler-kindnet-253147" [2bff5571-7346-4e66-940a-0b620ef01895] Running
	I0205 03:26:06.695233   70401 system_pods.go:61] "storage-provisioner" [ce65d45e-df0a-4d47-a17e-41f09bbd75e5] Running
	I0205 03:26:06.695241   70401 system_pods.go:74] duration metric: took 177.434125ms to wait for pod list to return data ...
	I0205 03:26:06.695261   70401 default_sa.go:34] waiting for default service account to be created ...
	I0205 03:26:06.892956   70401 default_sa.go:45] found service account: "default"
	I0205 03:26:06.892987   70401 default_sa.go:55] duration metric: took 197.719468ms for default service account to be created ...
	I0205 03:26:06.892999   70401 system_pods.go:116] waiting for k8s-apps to be running ...
	I0205 03:26:07.094401   70401 system_pods.go:86] 8 kube-system pods found
	I0205 03:26:07.094433   70401 system_pods.go:89] "coredns-668d6bf9bc-4gstv" [c0fa1918-640c-4dd9-bd15-5195b9794b39] Running
	I0205 03:26:07.094441   70401 system_pods.go:89] "etcd-kindnet-253147" [4f3cb9c4-d820-40c1-b06e-1a34796c8b4b] Running
	I0205 03:26:07.094447   70401 system_pods.go:89] "kindnet-jlqlj" [3c9d6c91-ceff-47a7-90ff-5a97e567dea5] Running
	I0205 03:26:07.094453   70401 system_pods.go:89] "kube-apiserver-kindnet-253147" [59c47da9-4336-4699-b915-0cc264e081c0] Running
	I0205 03:26:07.094459   70401 system_pods.go:89] "kube-controller-manager-kindnet-253147" [1d9a5075-8c93-48ad-9574-d7c1462171d8] Running
	I0205 03:26:07.094464   70401 system_pods.go:89] "kube-proxy-7grzr" [ed76a230-3935-41e9-8a06-cb1283172c68] Running
	I0205 03:26:07.094469   70401 system_pods.go:89] "kube-scheduler-kindnet-253147" [2bff5571-7346-4e66-940a-0b620ef01895] Running
	I0205 03:26:07.094474   70401 system_pods.go:89] "storage-provisioner" [ce65d45e-df0a-4d47-a17e-41f09bbd75e5] Running
	I0205 03:26:07.094483   70401 system_pods.go:126] duration metric: took 201.476128ms to wait for k8s-apps to be running ...
	I0205 03:26:07.094492   70401 system_svc.go:44] waiting for kubelet service to be running ....
	I0205 03:26:07.094546   70401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:26:07.108154   70401 system_svc.go:56] duration metric: took 13.652032ms WaitForService to wait for kubelet
	I0205 03:26:07.108186   70401 kubeadm.go:582] duration metric: took 18.378526411s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:26:07.108224   70401 node_conditions.go:102] verifying NodePressure condition ...
	I0205 03:26:07.295059   70401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0205 03:26:07.295099   70401 node_conditions.go:123] node cpu capacity is 2
	I0205 03:26:07.295114   70401 node_conditions.go:105] duration metric: took 186.884224ms to run NodePressure ...
	I0205 03:26:07.295128   70401 start.go:241] waiting for startup goroutines ...
	I0205 03:26:07.295140   70401 start.go:246] waiting for cluster config update ...
	I0205 03:26:07.295156   70401 start.go:255] writing updated cluster config ...
	I0205 03:26:07.295513   70401 ssh_runner.go:195] Run: rm -f paused
	I0205 03:26:07.350007   70401 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0205 03:26:07.351890   70401 out.go:177] * Done! kubectl is now configured to use "kindnet-253147" cluster and "default" namespace by default
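	# Sketch only: with the "kindnet-253147" context now configured, a quick sanity check of
	# the finished cluster.
	kubectl --context kindnet-253147 get nodes -o wide
	kubectl --context kindnet-253147 get pods -A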
	I0205 03:26:03.839589   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:05.839762   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:08.338719   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:10.837607   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:12.838537   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:15.338380   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:17.840252   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:19.840942   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:22.337864   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:24.837663   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:26.838278   68832 pod_ready.go:103] pod "metrics-server-f79f97bbb-k9q9v" in "kube-system" namespace has status "Ready":"False"
	I0205 03:26:30.188344   64850 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0205 03:26:30.188671   64850 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0205 03:26:30.188700   64850 kubeadm.go:310] 
	I0205 03:26:30.188744   64850 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0205 03:26:30.188800   64850 kubeadm.go:310] 		timed out waiting for the condition
	I0205 03:26:30.188809   64850 kubeadm.go:310] 
	I0205 03:26:30.188858   64850 kubeadm.go:310] 	This error is likely caused by:
	I0205 03:26:30.188898   64850 kubeadm.go:310] 		- The kubelet is not running
	I0205 03:26:30.188985   64850 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0205 03:26:30.188994   64850 kubeadm.go:310] 
	I0205 03:26:30.189183   64850 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0205 03:26:30.189262   64850 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0205 03:26:30.189315   64850 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0205 03:26:30.189328   64850 kubeadm.go:310] 
	I0205 03:26:30.189479   64850 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0205 03:26:30.189604   64850 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0205 03:26:30.189616   64850 kubeadm.go:310] 
	I0205 03:26:30.189794   64850 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0205 03:26:30.189910   64850 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0205 03:26:30.190015   64850 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0205 03:26:30.190114   64850 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0205 03:26:30.190170   64850 kubeadm.go:310] 
	I0205 03:26:30.190330   64850 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:26:30.190446   64850 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0205 03:26:30.190622   64850 kubeadm.go:394] duration metric: took 7m57.462882999s to StartCluster
	I0205 03:26:30.190638   64850 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
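	# The troubleshooting steps quoted by kubeadm above, collected as runnable commands
	# (sketch; run them on the failing node, e.g. via `minikube ssh -p old-k8s-version-191773`).
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause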
	I0205 03:26:30.190670   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0205 03:26:30.190724   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0205 03:26:30.239529   64850 cri.go:89] found id: ""
	I0205 03:26:30.239563   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.239575   64850 logs.go:284] No container was found matching "kube-apiserver"
	I0205 03:26:30.239585   64850 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0205 03:26:30.239655   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0205 03:26:30.280172   64850 cri.go:89] found id: ""
	I0205 03:26:30.280208   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.280220   64850 logs.go:284] No container was found matching "etcd"
	I0205 03:26:30.280229   64850 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0205 03:26:30.280297   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0205 03:26:30.334201   64850 cri.go:89] found id: ""
	I0205 03:26:30.334228   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.334238   64850 logs.go:284] No container was found matching "coredns"
	I0205 03:26:30.334250   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0205 03:26:30.334310   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0205 03:26:30.376499   64850 cri.go:89] found id: ""
	I0205 03:26:30.376525   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.376532   64850 logs.go:284] No container was found matching "kube-scheduler"
	I0205 03:26:30.376539   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0205 03:26:30.376600   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0205 03:26:30.419583   64850 cri.go:89] found id: ""
	I0205 03:26:30.419608   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.419616   64850 logs.go:284] No container was found matching "kube-proxy"
	I0205 03:26:30.419622   64850 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0205 03:26:30.419681   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0205 03:26:30.457014   64850 cri.go:89] found id: ""
	I0205 03:26:30.457049   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.457059   64850 logs.go:284] No container was found matching "kube-controller-manager"
	I0205 03:26:30.457067   64850 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0205 03:26:30.457121   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0205 03:26:30.501068   64850 cri.go:89] found id: ""
	I0205 03:26:30.501091   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.501098   64850 logs.go:284] No container was found matching "kindnet"
	I0205 03:26:30.501104   64850 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0205 03:26:30.501161   64850 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0205 03:26:30.538384   64850 cri.go:89] found id: ""
	I0205 03:26:30.538420   64850 logs.go:282] 0 containers: []
	W0205 03:26:30.538431   64850 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0205 03:26:30.538443   64850 logs.go:123] Gathering logs for describe nodes ...
	I0205 03:26:30.538460   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0205 03:26:30.627025   64850 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0205 03:26:30.627055   64850 logs.go:123] Gathering logs for CRI-O ...
	I0205 03:26:30.627072   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0205 03:26:30.749529   64850 logs.go:123] Gathering logs for container status ...
	I0205 03:26:30.749561   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0205 03:26:30.794162   64850 logs.go:123] Gathering logs for kubelet ...
	I0205 03:26:30.794188   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0205 03:26:30.849515   64850 logs.go:123] Gathering logs for dmesg ...
	I0205 03:26:30.849555   64850 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0205 03:26:30.865114   64850 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0205 03:26:30.865167   64850 out.go:270] * 
	W0205 03:26:30.865227   64850 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0205 03:26:30.865244   64850 out.go:270] * 
	W0205 03:26:30.866482   64850 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0205 03:26:30.869500   64850 out.go:201] 
	W0205 03:26:30.870525   64850 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0205 03:26:30.870589   64850 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0205 03:26:30.870619   64850 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0205 03:26:30.871955   64850 out.go:201] 
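	# Sketch only: the suggestion above corresponds to retrying the start with the kubelet
	# cgroup driver pinned to systemd (other flags from the original invocation omitted here).
	minikube start -p old-k8s-version-191773 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd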
	
	
	==> CRI-O <==
	Feb 05 03:26:31 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:31.956103220Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725991956069169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64d9f992-238a-486c-acf4-39630bbf379e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:26:31 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:31.956893524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fef3d808-1488-4f8d-b6d9-4240eb0b1ac9 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:26:31 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:31.956940597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fef3d808-1488-4f8d-b6d9-4240eb0b1ac9 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:26:31 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:31.956970075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fef3d808-1488-4f8d-b6d9-4240eb0b1ac9 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:26:31 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:31.993882371Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d634c66a-a154-4106-b840-98e6578332d3 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:26:31 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:31.994035763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d634c66a-a154-4106-b840-98e6578332d3 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:26:31 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:31.998384826Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=358acb83-8aa6-4354-97ce-3e55e5ab85ae name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:26:31 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:31.999071011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725991999039105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=358acb83-8aa6-4354-97ce-3e55e5ab85ae name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:26:31 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:31.999896842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7e5d30e-aa3c-4737-a3d8-b3e5f12a1299 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.000100462Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7e5d30e-aa3c-4737-a3d8-b3e5f12a1299 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.000210852Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a7e5d30e-aa3c-4737-a3d8-b3e5f12a1299 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.036266633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f6eee42-7d8c-424c-aaff-8f2faa17a831 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.036366040Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f6eee42-7d8c-424c-aaff-8f2faa17a831 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.037457319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6c727fc-35fe-404e-ac20-b04cecbe00ad name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.037899245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725992037867131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6c727fc-35fe-404e-ac20-b04cecbe00ad name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.038579578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf3f1b3a-3e55-4099-b5d3-c78da1036b9b name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.038665715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf3f1b3a-3e55-4099-b5d3-c78da1036b9b name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.038721215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bf3f1b3a-3e55-4099-b5d3-c78da1036b9b name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.072481915Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0881ddb-a04b-4522-8e70-61bdf76ebd0a name=/runtime.v1.RuntimeService/Version
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.072558881Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0881ddb-a04b-4522-8e70-61bdf76ebd0a name=/runtime.v1.RuntimeService/Version
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.073814772Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=011a93d4-62b9-494b-88f9-4f527b4f876c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.074269317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738725992074245044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=011a93d4-62b9-494b-88f9-4f527b4f876c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.074906540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=299c0560-acbc-4940-9d9b-1348cf7fb894 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.074957955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=299c0560-acbc-4940-9d9b-1348cf7fb894 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:26:32 old-k8s-version-191773 crio[628]: time="2025-02-05 03:26:32.074990069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=299c0560-acbc-4940-9d9b-1348cf7fb894 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb 5 03:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053905] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.006200] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.096613] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.500084] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.640447] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.062173] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064592] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.179648] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.107931] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.224718] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.148854] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.061424] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.976913] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[ +13.604699] kauditd_printk_skb: 46 callbacks suppressed
	[Feb 5 03:22] systemd-fstab-generator[5042]: Ignoring "noauto" option for root device
	[Feb 5 03:24] systemd-fstab-generator[5320]: Ignoring "noauto" option for root device
	[  +0.067795] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 03:26:32 up 8 min,  0 users,  load average: 0.02, 0.14, 0.09
	Linux old-k8s-version-191773 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000b62ee0, 0xc00009e0c0)
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:218
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]: goroutine 149 [syscall]:
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]: syscall.Syscall6(0xe8, 0xc, 0xc000d0fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xc, 0xc000d0fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000758e20, 0x0, 0x0, 0x0)
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000205a90)
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Feb 05 03:26:30 old-k8s-version-191773 kubelet[5498]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Feb 05 03:26:30 old-k8s-version-191773 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 05 03:26:30 old-k8s-version-191773 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 05 03:26:30 old-k8s-version-191773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Feb 05 03:26:30 old-k8s-version-191773 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 05 03:26:30 old-k8s-version-191773 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 05 03:26:31 old-k8s-version-191773 kubelet[5562]: I0205 03:26:31.031250    5562 server.go:416] Version: v1.20.0
	Feb 05 03:26:31 old-k8s-version-191773 kubelet[5562]: I0205 03:26:31.031551    5562 server.go:837] Client rotation is on, will bootstrap in background
	Feb 05 03:26:31 old-k8s-version-191773 kubelet[5562]: I0205 03:26:31.034516    5562 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 05 03:26:31 old-k8s-version-191773 kubelet[5562]: I0205 03:26:31.035510    5562 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Feb 05 03:26:31 old-k8s-version-191773 kubelet[5562]: W0205 03:26:31.035539    5562 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
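The post-mortem dump above shows an empty CRI container list, a refused connection to localhost:8443 from kubectl, and the kubelet stuck in a systemd restart loop (restart counter at 20), which together point at the control plane never coming back up on this node. A minimal sketch of node-side checks that could confirm that, assuming the old-k8s-version-191773 VM is still reachable over minikube ssh (these are generic diagnostics, not commands the test suite itself runs):

	# sketch only; assumes the profile VM is still running and reachable
	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "sudo crictl ps -a"                            # was a kube-apiserver container ever created?
	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "curl -sk https://localhost:8443/healthz"      # expect connection refused while the apiserver is down
	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "sudo journalctl -u kubelet -n 50 --no-pager"  # why the kubelet keeps exiting with status 255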
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191773 -n old-k8s-version-191773
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 2 (226.268548ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-191773" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (507.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
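The warnings below all come from the test helper polling the apiserver at 192.168.39.74:8443, which is refusing connections, so no pod list ever comes back. Roughly the equivalent manual check would be the kubectl call below, assuming the kubectl context carries the profile name, as minikube normally configures it:

	# context name assumed to match the profile; fails with the same "connection refused" until kube-apiserver serves on 8443
	kubectl --context old-k8s-version-191773 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard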
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[... the identical "connection refused" warning repeated while the 9m0s wait continued ...]
E0205 03:26:54.513939   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
[... the identical "connection refused" warning continued to repeat ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:29:09.273478   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:29:10.654768   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[the above helpers_test.go:329 warning repeated 26 more times]
E0205 03:29:38.355338   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:29:40.947129   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:29:40.953520   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:29:40.964892   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:29:40.986230   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:29:41.027608   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:29:41.109061   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:29:41.270552   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:29:41.592763   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:29:42.234570   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[the above helpers_test.go:329 warning repeated 3 more times]
E0205 03:29:46.078200   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[the above helpers_test.go:329 warning repeated 4 more times]
E0205 03:29:51.200116   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[the above helpers_test.go:329 warning repeated 9 more times]
E0205 03:30:01.442279   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[the above helpers_test.go:329 warning repeated 11 more times]
I0205 03:30:12.683310   19989 config.go:182] Loaded profile config "flannel-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[the above helpers_test.go:329 warning repeated 49 more times]
E0205 03:31:02.885557   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:31:04.764685   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:31:07.366754   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:31:07.373104   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:31:07.384463   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:31:07.405866   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:31:07.447284   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:31:07.528724   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:31:07.689997   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:31:08.012243   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:31:08.654566   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:31:09.936388   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:31:12.498591   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[the above helpers_test.go:329 warning repeated 5 more times]
E0205 03:31:17.620113   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[last message repeated 9 more times]
E0205 03:31:27.862208   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[last message repeated 19 more times]
E0205 03:31:48.343568   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[last message repeated 36 more times]
E0205 03:32:24.807354   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[last message repeated 3 more times]
E0205 03:32:29.305377   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[last message repeated 23 more times]
E0205 03:32:53.054812   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:53.061156   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:53.072530   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:53.093926   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:53.135313   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:53.216755   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:53.378283   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:32:53.699756   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:54.341781   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:32:55.623827   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:32:58.185797   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:32:58.896725   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:58.903044   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:58.914349   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:58.935650   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:58.976983   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:59.058358   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:59.219892   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:32:59.541701   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:33:00.183306   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:33:01.465476   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:33:03.307359   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:33:04.027389   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[last message repeated 4 more times]
E0205 03:33:09.149202   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[last message repeated 3 more times]
E0205 03:33:13.549430   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[last message repeated 5 more times]
E0205 03:33:19.391374   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
[last message repeated 14 more times]
E0205 03:33:34.030873   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:33:39.873539   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:33:51.226869   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:07.844372   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:09.273002   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:10.655219   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:14.992360   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:20.835814   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:40.947093   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:43.476570   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:34:43.482914   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:34:43.494224   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:34:43.515581   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:34:43.556972   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:43.639062   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:34:43.800614   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:34:44.122313   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:44.763952   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:46.045262   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:48.606677   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:34:53.728710   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:35:03.970935   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:35:06.055368   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:35:06.061722   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:35:06.073052   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:35:06.094415   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:35:06.135809   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:35:06.217276   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:35:06.378823   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:35:06.700338   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:35:07.342399   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:35:08.624602   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:35:08.649072   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:35:11.186309   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:35:16.308520   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:35:24.453274   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:35:26.550219   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191773 -n old-k8s-version-191773
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 2 (214.342504ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-191773" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
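For reference, a manual triage of this failure mode could look roughly like the following, reusing the profile name, namespace, and label selector shown in the log above. These are standard minikube/kubectl invocations, not output captured from this run, and they assume the kubectl context created for the profile still exists:

	# host/apiserver state for the profile that timed out
	out/minikube-linux-amd64 status -p old-k8s-version-191773
	# same log collection the post-mortem below performs
	out/minikube-linux-amd64 -p old-k8s-version-191773 logs -n 25
	# the query the test kept retrying against k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-191773 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard -o wide

While the API server at 192.168.39.74:8443 keeps refusing connections, the kubectl call will fail the same way the test's poll did, so the minikube status and logs output is the more useful signal.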
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 2 (211.658217ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-191773 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147 sudo cat                | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147 sudo cat                | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147 sudo cat                | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
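	
	The Audit table above records the post-mortem diagnostics the suite ran against the enable-default-cni-253147 profile; each row is one minikube CLI invocation, mostly "ssh" commands executed on the node. As a minimal, hypothetical sketch (not the harness's actual helper; binary path and profile name are copied from the table and from the report's own invocations), one such row could be replayed with os/exec like this:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// runDiag replays one Audit-table row: a "minikube ssh <remoteCmd>" diagnostic
	// against the named profile. The binary path assumes a local integration checkout.
	func runDiag(profile, remoteCmd string) (string, error) {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", remoteCmd).CombinedOutput()
		return string(out), err
	}
	
	func main() {
		// Mirrors the table row: ssh -p enable-default-cni-253147 sudo systemctl cat crio --no-pager
		out, err := runDiag("enable-default-cni-253147", "sudo systemctl cat crio --no-pager")
		if err != nil {
			fmt.Println("diagnostic failed:", err)
		}
		fmt.Print(out)
	}
	
	(Note the final table row deletes the profile, so replaying these commands only makes sense against a profile that still exists.)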
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 03:30:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
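	(Reading that format against the first entry below: in "I0205 03:30:13.058904   77491 out.go:345]", "I" is the severity (Info), "0205" the month and day, "03:30:13.058904" the timestamp, "77491" the thread id, and "out.go:345" the source file and line, followed by the message.)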
	I0205 03:30:13.058904   77491 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:30:13.059041   77491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:30:13.059052   77491 out.go:358] Setting ErrFile to fd 2...
	I0205 03:30:13.059059   77491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:30:13.059250   77491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:30:13.059842   77491 out.go:352] Setting JSON to false
	I0205 03:30:13.060925   77491 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7964,"bootTime":1738718249,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:30:13.061015   77491 start.go:139] virtualization: kvm guest
	I0205 03:30:13.062790   77491 out.go:177] * [enable-default-cni-253147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:30:13.064298   77491 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:30:13.064304   77491 notify.go:220] Checking for updates...
	I0205 03:30:13.066361   77491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:30:13.067427   77491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:30:13.068416   77491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:30:13.069475   77491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:30:13.070547   77491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:30:13.071902   77491 config.go:182] Loaded profile config "bridge-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:30:13.072016   77491 config.go:182] Loaded profile config "flannel-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:30:13.072156   77491 config.go:182] Loaded profile config "old-k8s-version-191773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0205 03:30:13.072247   77491 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:30:13.948554   77491 out.go:177] * Using the kvm2 driver based on user configuration
	I0205 03:30:13.949679   77491 start.go:297] selected driver: kvm2
	I0205 03:30:13.949696   77491 start.go:901] validating driver "kvm2" against <nil>
	I0205 03:30:13.949707   77491 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:30:13.950427   77491 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:30:13.950526   77491 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:30:13.968041   77491 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:30:13.968159   77491 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0205 03:30:13.968502   77491 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0205 03:30:13.968542   77491 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:30:13.968583   77491 cni.go:84] Creating CNI manager for "bridge"
	I0205 03:30:13.968591   77491 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0205 03:30:13.968684   77491 start.go:340] cluster config:
	{Name:enable-default-cni-253147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:30:13.968829   77491 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:30:13.970557   77491 out.go:177] * Starting "enable-default-cni-253147" primary control-plane node in "enable-default-cni-253147" cluster
	I0205 03:30:11.792298   77242 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0205 03:30:11.792433   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:11.792476   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:11.807003   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0205 03:30:11.807471   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:11.807993   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:11.808016   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:11.808363   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:11.808572   77242 main.go:141] libmachine: (bridge-253147) Calling .GetMachineName
	I0205 03:30:11.808746   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:11.808892   77242 start.go:159] libmachine.API.Create for "bridge-253147" (driver="kvm2")
	I0205 03:30:11.808924   77242 client.go:168] LocalClient.Create starting
	I0205 03:30:11.808967   77242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem
	I0205 03:30:11.809004   77242 main.go:141] libmachine: Decoding PEM data...
	I0205 03:30:11.809019   77242 main.go:141] libmachine: Parsing certificate...
	I0205 03:30:11.809087   77242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem
	I0205 03:30:11.809110   77242 main.go:141] libmachine: Decoding PEM data...
	I0205 03:30:11.809127   77242 main.go:141] libmachine: Parsing certificate...
	I0205 03:30:11.809160   77242 main.go:141] libmachine: Running pre-create checks...
	I0205 03:30:11.809172   77242 main.go:141] libmachine: (bridge-253147) Calling .PreCreateCheck
	I0205 03:30:11.809522   77242 main.go:141] libmachine: (bridge-253147) Calling .GetConfigRaw
	I0205 03:30:11.809936   77242 main.go:141] libmachine: Creating machine...
	I0205 03:30:11.809948   77242 main.go:141] libmachine: (bridge-253147) Calling .Create
	I0205 03:30:11.810068   77242 main.go:141] libmachine: (bridge-253147) creating KVM machine...
	I0205 03:30:11.810087   77242 main.go:141] libmachine: (bridge-253147) creating network...
	I0205 03:30:11.812574   77242 main.go:141] libmachine: (bridge-253147) DBG | found existing default KVM network
	I0205 03:30:11.938961   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:11.938763   77289 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e6:1a:05} reservation:<nil>}
	I0205 03:30:11.939948   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:11.939863   77289 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027c820}
	I0205 03:30:11.939991   77242 main.go:141] libmachine: (bridge-253147) DBG | created network xml: 
	I0205 03:30:11.940010   77242 main.go:141] libmachine: (bridge-253147) DBG | <network>
	I0205 03:30:11.940020   77242 main.go:141] libmachine: (bridge-253147) DBG |   <name>mk-bridge-253147</name>
	I0205 03:30:11.940025   77242 main.go:141] libmachine: (bridge-253147) DBG |   <dns enable='no'/>
	I0205 03:30:11.940030   77242 main.go:141] libmachine: (bridge-253147) DBG |   
	I0205 03:30:11.940038   77242 main.go:141] libmachine: (bridge-253147) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0205 03:30:11.940043   77242 main.go:141] libmachine: (bridge-253147) DBG |     <dhcp>
	I0205 03:30:11.940049   77242 main.go:141] libmachine: (bridge-253147) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0205 03:30:11.940060   77242 main.go:141] libmachine: (bridge-253147) DBG |     </dhcp>
	I0205 03:30:11.940064   77242 main.go:141] libmachine: (bridge-253147) DBG |   </ip>
	I0205 03:30:11.940068   77242 main.go:141] libmachine: (bridge-253147) DBG |   
	I0205 03:30:11.940072   77242 main.go:141] libmachine: (bridge-253147) DBG | </network>
	I0205 03:30:11.940079   77242 main.go:141] libmachine: (bridge-253147) DBG | 
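	The XML dumped just above is the libvirt network definition minikube generated for the private mk-bridge-253147 network: DNS disabled, gateway 192.168.50.1/24, and a DHCP range of .2-.253. A minimal Go sketch, using hand-rolled types that mirror only the elements visible in the log (not minikube's internal template), reproduces the same shape with encoding/xml:
	
	package main
	
	import (
		"encoding/xml"
		"fmt"
	)
	
	// Illustrative types matching the <network> document shown in the log above.
	type dhcpRange struct {
		Start string `xml:"start,attr"`
		End   string `xml:"end,attr"`
	}
	
	type ipElem struct {
		Address string    `xml:"address,attr"`
		Netmask string    `xml:"netmask,attr"`
		Range   dhcpRange `xml:"dhcp>range"`
	}
	
	type dnsElem struct {
		Enable string `xml:"enable,attr"`
	}
	
	type network struct {
		XMLName xml.Name `xml:"network"`
		Name    string   `xml:"name"`
		DNS     dnsElem  `xml:"dns"`
		IP      ipElem   `xml:"ip"`
	}
	
	func main() {
		n := network{
			Name: "mk-bridge-253147",
			DNS:  dnsElem{Enable: "no"},
			IP: ipElem{
				Address: "192.168.50.1",
				Netmask: "255.255.255.0",
				Range:   dhcpRange{Start: "192.168.50.2", End: "192.168.50.253"},
			},
		}
		out, _ := xml.MarshalIndent(n, "", "  ")
		fmt.Println(string(out))
	}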
	I0205 03:30:12.489028   77242 main.go:141] libmachine: (bridge-253147) DBG | trying to create private KVM network mk-bridge-253147 192.168.50.0/24...
	I0205 03:30:12.565957   77242 main.go:141] libmachine: (bridge-253147) setting up store path in /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147 ...
	I0205 03:30:12.565984   77242 main.go:141] libmachine: (bridge-253147) building disk image from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0205 03:30:12.565995   77242 main.go:141] libmachine: (bridge-253147) DBG | private KVM network mk-bridge-253147 192.168.50.0/24 created
	I0205 03:30:12.566012   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:12.565893   77289 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:30:12.566051   77242 main.go:141] libmachine: (bridge-253147) Downloading /home/jenkins/minikube-integration/20363-12788/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0205 03:30:12.855158   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:12.855017   77289 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa...
	I0205 03:30:12.960059   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:12.959920   77289 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/bridge-253147.rawdisk...
	I0205 03:30:12.960097   77242 main.go:141] libmachine: (bridge-253147) DBG | Writing magic tar header
	I0205 03:30:12.960113   77242 main.go:141] libmachine: (bridge-253147) DBG | Writing SSH key tar header
	I0205 03:30:12.960133   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:12.960037   77289 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147 ...
	I0205 03:30:12.960152   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147
	I0205 03:30:12.960168   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147 (perms=drwx------)
	I0205 03:30:12.960183   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines
	I0205 03:30:12.960205   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines (perms=drwxr-xr-x)
	I0205 03:30:12.960221   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:30:12.960237   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube (perms=drwxr-xr-x)
	I0205 03:30:12.960249   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788
	I0205 03:30:12.960258   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788 (perms=drwxrwxr-x)
	I0205 03:30:12.960270   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0205 03:30:12.960287   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0205 03:30:12.960305   77242 main.go:141] libmachine: (bridge-253147) creating domain...
	I0205 03:30:12.960326   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0205 03:30:12.960345   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins
	I0205 03:30:12.960357   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home
	I0205 03:30:12.960369   77242 main.go:141] libmachine: (bridge-253147) DBG | skipping /home - not owner
	I0205 03:30:12.961532   77242 main.go:141] libmachine: (bridge-253147) define libvirt domain using xml: 
	I0205 03:30:12.961556   77242 main.go:141] libmachine: (bridge-253147) <domain type='kvm'>
	I0205 03:30:12.961578   77242 main.go:141] libmachine: (bridge-253147)   <name>bridge-253147</name>
	I0205 03:30:12.961587   77242 main.go:141] libmachine: (bridge-253147)   <memory unit='MiB'>3072</memory>
	I0205 03:30:12.961599   77242 main.go:141] libmachine: (bridge-253147)   <vcpu>2</vcpu>
	I0205 03:30:12.961604   77242 main.go:141] libmachine: (bridge-253147)   <features>
	I0205 03:30:12.961611   77242 main.go:141] libmachine: (bridge-253147)     <acpi/>
	I0205 03:30:12.961625   77242 main.go:141] libmachine: (bridge-253147)     <apic/>
	I0205 03:30:12.961647   77242 main.go:141] libmachine: (bridge-253147)     <pae/>
	I0205 03:30:12.961662   77242 main.go:141] libmachine: (bridge-253147)     
	I0205 03:30:12.961670   77242 main.go:141] libmachine: (bridge-253147)   </features>
	I0205 03:30:12.961678   77242 main.go:141] libmachine: (bridge-253147)   <cpu mode='host-passthrough'>
	I0205 03:30:12.961688   77242 main.go:141] libmachine: (bridge-253147)   
	I0205 03:30:12.961699   77242 main.go:141] libmachine: (bridge-253147)   </cpu>
	I0205 03:30:12.961707   77242 main.go:141] libmachine: (bridge-253147)   <os>
	I0205 03:30:12.961713   77242 main.go:141] libmachine: (bridge-253147)     <type>hvm</type>
	I0205 03:30:12.961725   77242 main.go:141] libmachine: (bridge-253147)     <boot dev='cdrom'/>
	I0205 03:30:12.961735   77242 main.go:141] libmachine: (bridge-253147)     <boot dev='hd'/>
	I0205 03:30:12.961768   77242 main.go:141] libmachine: (bridge-253147)     <bootmenu enable='no'/>
	I0205 03:30:12.961793   77242 main.go:141] libmachine: (bridge-253147)   </os>
	I0205 03:30:12.961802   77242 main.go:141] libmachine: (bridge-253147)   <devices>
	I0205 03:30:12.961813   77242 main.go:141] libmachine: (bridge-253147)     <disk type='file' device='cdrom'>
	I0205 03:30:12.961826   77242 main.go:141] libmachine: (bridge-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/boot2docker.iso'/>
	I0205 03:30:12.961835   77242 main.go:141] libmachine: (bridge-253147)       <target dev='hdc' bus='scsi'/>
	I0205 03:30:12.961852   77242 main.go:141] libmachine: (bridge-253147)       <readonly/>
	I0205 03:30:12.961865   77242 main.go:141] libmachine: (bridge-253147)     </disk>
	I0205 03:30:12.961885   77242 main.go:141] libmachine: (bridge-253147)     <disk type='file' device='disk'>
	I0205 03:30:12.961903   77242 main.go:141] libmachine: (bridge-253147)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0205 03:30:12.961921   77242 main.go:141] libmachine: (bridge-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/bridge-253147.rawdisk'/>
	I0205 03:30:12.961933   77242 main.go:141] libmachine: (bridge-253147)       <target dev='hda' bus='virtio'/>
	I0205 03:30:12.961944   77242 main.go:141] libmachine: (bridge-253147)     </disk>
	I0205 03:30:12.961955   77242 main.go:141] libmachine: (bridge-253147)     <interface type='network'>
	I0205 03:30:12.961968   77242 main.go:141] libmachine: (bridge-253147)       <source network='mk-bridge-253147'/>
	I0205 03:30:12.961976   77242 main.go:141] libmachine: (bridge-253147)       <model type='virtio'/>
	I0205 03:30:12.961995   77242 main.go:141] libmachine: (bridge-253147)     </interface>
	I0205 03:30:12.962002   77242 main.go:141] libmachine: (bridge-253147)     <interface type='network'>
	I0205 03:30:12.962012   77242 main.go:141] libmachine: (bridge-253147)       <source network='default'/>
	I0205 03:30:12.962022   77242 main.go:141] libmachine: (bridge-253147)       <model type='virtio'/>
	I0205 03:30:12.962032   77242 main.go:141] libmachine: (bridge-253147)     </interface>
	I0205 03:30:12.962042   77242 main.go:141] libmachine: (bridge-253147)     <serial type='pty'>
	I0205 03:30:12.962054   77242 main.go:141] libmachine: (bridge-253147)       <target port='0'/>
	I0205 03:30:12.962064   77242 main.go:141] libmachine: (bridge-253147)     </serial>
	I0205 03:30:12.962073   77242 main.go:141] libmachine: (bridge-253147)     <console type='pty'>
	I0205 03:30:12.962083   77242 main.go:141] libmachine: (bridge-253147)       <target type='serial' port='0'/>
	I0205 03:30:12.962092   77242 main.go:141] libmachine: (bridge-253147)     </console>
	I0205 03:30:12.962102   77242 main.go:141] libmachine: (bridge-253147)     <rng model='virtio'>
	I0205 03:30:12.962113   77242 main.go:141] libmachine: (bridge-253147)       <backend model='random'>/dev/random</backend>
	I0205 03:30:12.962123   77242 main.go:141] libmachine: (bridge-253147)     </rng>
	I0205 03:30:12.962133   77242 main.go:141] libmachine: (bridge-253147)     
	I0205 03:30:12.962142   77242 main.go:141] libmachine: (bridge-253147)     
	I0205 03:30:12.962150   77242 main.go:141] libmachine: (bridge-253147)   </devices>
	I0205 03:30:12.962157   77242 main.go:141] libmachine: (bridge-253147) </domain>
	I0205 03:30:12.962170   77242 main.go:141] libmachine: (bridge-253147) 
	I0205 03:30:12.969379   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:cf:7f:ba in network default
	I0205 03:30:12.970167   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:12.970200   77242 main.go:141] libmachine: (bridge-253147) starting domain...
	I0205 03:30:12.970220   77242 main.go:141] libmachine: (bridge-253147) ensuring networks are active...
	I0205 03:30:12.971065   77242 main.go:141] libmachine: (bridge-253147) Ensuring network default is active
	I0205 03:30:12.971477   77242 main.go:141] libmachine: (bridge-253147) Ensuring network mk-bridge-253147 is active
	I0205 03:30:12.972066   77242 main.go:141] libmachine: (bridge-253147) getting domain XML...
	I0205 03:30:12.972914   77242 main.go:141] libmachine: (bridge-253147) creating domain...
	I0205 03:30:14.273914   77242 main.go:141] libmachine: (bridge-253147) waiting for IP...
	I0205 03:30:14.274688   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:14.275235   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:14.275342   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:14.275248   77289 retry.go:31] will retry after 305.177217ms: waiting for domain to come up
	I0205 03:30:14.581781   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:14.582455   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:14.582483   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:14.582419   77289 retry.go:31] will retry after 267.088448ms: waiting for domain to come up
	I0205 03:30:14.850832   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:14.851332   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:14.851369   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:14.851305   77289 retry.go:31] will retry after 408.091339ms: waiting for domain to come up
	I0205 03:30:15.261214   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:15.261815   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:15.261850   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:15.261757   77289 retry.go:31] will retry after 594.941946ms: waiting for domain to come up
	I0205 03:30:15.860548   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:15.861097   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:15.861275   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:15.861171   77289 retry.go:31] will retry after 628.329015ms: waiting for domain to come up
	I0205 03:30:16.491123   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:16.491724   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:16.491768   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:16.491692   77289 retry.go:31] will retry after 777.442694ms: waiting for domain to come up
	I0205 03:30:13.971691   77491 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:30:13.971753   77491 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 03:30:13.971767   77491 cache.go:56] Caching tarball of preloaded images
	I0205 03:30:13.971880   77491 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 03:30:13.971895   77491 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 03:30:13.972022   77491 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/config.json ...
	I0205 03:30:13.972046   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/config.json: {Name:mk2d8203c5bd379ff80e35aa7d483c877cb991a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:13.972236   77491 start.go:360] acquireMachinesLock for enable-default-cni-253147: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:30:17.270468   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:17.270930   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:17.270970   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:17.270899   77289 retry.go:31] will retry after 1.142243743s: waiting for domain to come up
	I0205 03:30:18.414357   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:18.414829   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:18.414855   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:18.414802   77289 retry.go:31] will retry after 1.264093425s: waiting for domain to come up
	I0205 03:30:19.681132   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:19.681619   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:19.681640   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:19.681609   77289 retry.go:31] will retry after 1.561141318s: waiting for domain to come up
	I0205 03:30:21.245250   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:21.245808   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:21.245866   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:21.245809   77289 retry.go:31] will retry after 1.818541717s: waiting for domain to come up
	I0205 03:30:23.066293   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:23.066843   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:23.066870   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:23.066808   77289 retry.go:31] will retry after 2.860967461s: waiting for domain to come up
	I0205 03:30:25.929813   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:25.930339   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:25.930377   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:25.930305   77289 retry.go:31] will retry after 2.262438462s: waiting for domain to come up
	I0205 03:30:28.194336   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:28.194742   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:28.194764   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:28.194725   77289 retry.go:31] will retry after 2.755818062s: waiting for domain to come up
	I0205 03:30:30.952245   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:30.952691   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:30.952714   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:30.952656   77289 retry.go:31] will retry after 3.807968232s: waiting for domain to come up
	I0205 03:30:36.566403   77491 start.go:364] duration metric: took 22.594138599s to acquireMachinesLock for "enable-default-cni-253147"
	I0205 03:30:36.566456   77491 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-de
fault-cni-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:30:36.566550   77491 start.go:125] createHost starting for "" (driver="kvm2")
	I0205 03:30:34.762374   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.762960   77242 main.go:141] libmachine: (bridge-253147) found domain IP: 192.168.50.246
	I0205 03:30:34.762979   77242 main.go:141] libmachine: (bridge-253147) reserving static IP address...
	I0205 03:30:34.762992   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has current primary IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.763299   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find host DHCP lease matching {name: "bridge-253147", mac: "52:54:00:8f:4a:f9", ip: "192.168.50.246"} in network mk-bridge-253147
	I0205 03:30:34.838594   77242 main.go:141] libmachine: (bridge-253147) DBG | Getting to WaitForSSH function...
	I0205 03:30:34.838624   77242 main.go:141] libmachine: (bridge-253147) reserved static IP address 192.168.50.246 for domain bridge-253147
	I0205 03:30:34.838637   77242 main.go:141] libmachine: (bridge-253147) waiting for SSH...
	I0205 03:30:34.841690   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.842136   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:34.842163   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.842340   77242 main.go:141] libmachine: (bridge-253147) DBG | Using SSH client type: external
	I0205 03:30:34.842360   77242 main.go:141] libmachine: (bridge-253147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa (-rw-------)
	I0205 03:30:34.842399   77242 main.go:141] libmachine: (bridge-253147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 03:30:34.842418   77242 main.go:141] libmachine: (bridge-253147) DBG | About to run SSH command:
	I0205 03:30:34.842438   77242 main.go:141] libmachine: (bridge-253147) DBG | exit 0
	I0205 03:30:34.973228   77242 main.go:141] libmachine: (bridge-253147) DBG | SSH cmd err, output: <nil>: 
	I0205 03:30:34.973525   77242 main.go:141] libmachine: (bridge-253147) KVM machine creation complete
	I0205 03:30:34.973823   77242 main.go:141] libmachine: (bridge-253147) Calling .GetConfigRaw
	I0205 03:30:34.974541   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:34.974708   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:34.974905   77242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0205 03:30:34.974920   77242 main.go:141] libmachine: (bridge-253147) Calling .GetState
	I0205 03:30:34.976304   77242 main.go:141] libmachine: Detecting operating system of created instance...
	I0205 03:30:34.976320   77242 main.go:141] libmachine: Waiting for SSH to be available...
	I0205 03:30:34.976327   77242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0205 03:30:34.976334   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:34.979685   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.980296   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:34.980339   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.980477   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:34.980624   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:34.980749   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:34.980852   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:34.981007   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:34.981220   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:34.981232   77242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0205 03:30:35.096392   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:30:35.096414   77242 main.go:141] libmachine: Detecting the provisioner...
	I0205 03:30:35.096421   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:35.099236   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.099611   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.099643   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.099833   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:35.100002   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.100163   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.100285   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:35.100429   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:35.100604   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:35.100616   77242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0205 03:30:35.218495   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0205 03:30:35.218548   77242 main.go:141] libmachine: found compatible host: buildroot
	I0205 03:30:35.218560   77242 main.go:141] libmachine: Provisioning with buildroot...
	I0205 03:30:35.218570   77242 main.go:141] libmachine: (bridge-253147) Calling .GetMachineName
	I0205 03:30:35.218775   77242 buildroot.go:166] provisioning hostname "bridge-253147"
	I0205 03:30:35.218801   77242 main.go:141] libmachine: (bridge-253147) Calling .GetMachineName
	I0205 03:30:35.218961   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:35.221601   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.221894   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.221926   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.222080   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:35.222254   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.222429   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.222566   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:35.222709   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:35.222922   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:35.222943   77242 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-253147 && echo "bridge-253147" | sudo tee /etc/hostname
	I0205 03:30:35.359778   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-253147
	
	I0205 03:30:35.359806   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:35.362911   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.363387   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.363415   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.363627   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:35.363816   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.363976   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.364149   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:35.364339   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:35.364536   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:35.364555   77242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-253147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-253147/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-253147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:30:35.493785   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:30:35.493813   77242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:30:35.493842   77242 buildroot.go:174] setting up certificates
	I0205 03:30:35.493852   77242 provision.go:84] configureAuth start
	I0205 03:30:35.493860   77242 main.go:141] libmachine: (bridge-253147) Calling .GetMachineName
	I0205 03:30:35.494097   77242 main.go:141] libmachine: (bridge-253147) Calling .GetIP
	I0205 03:30:35.496551   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.496935   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.496967   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.497072   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:35.499546   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.499951   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.499978   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.500139   77242 provision.go:143] copyHostCerts
	I0205 03:30:35.500204   77242 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:30:35.500226   77242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:30:35.500312   77242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:30:35.500409   77242 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:30:35.500418   77242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:30:35.500445   77242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:30:35.500510   77242 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:30:35.500517   77242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:30:35.500538   77242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:30:35.500599   77242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.bridge-253147 san=[127.0.0.1 192.168.50.246 bridge-253147 localhost minikube]
	I0205 03:30:35.882545   77242 provision.go:177] copyRemoteCerts
	I0205 03:30:35.882601   77242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:30:35.882621   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:35.885264   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.885625   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.885661   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.885847   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:35.886014   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.886182   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:35.886311   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:35.975581   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0205 03:30:36.000767   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0205 03:30:36.026723   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:30:36.057309   77242 provision.go:87] duration metric: took 563.440863ms to configureAuth
	I0205 03:30:36.057364   77242 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:30:36.057565   77242 config.go:182] Loaded profile config "bridge-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:30:36.057639   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:36.060404   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.060803   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.060835   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.061047   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:36.061260   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.061427   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.061575   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:36.061769   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:36.061968   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:36.061989   77242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:30:36.298630   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 03:30:36.298659   77242 main.go:141] libmachine: Checking connection to Docker...
	I0205 03:30:36.298669   77242 main.go:141] libmachine: (bridge-253147) Calling .GetURL
	I0205 03:30:36.299977   77242 main.go:141] libmachine: (bridge-253147) DBG | using libvirt version 6000000
	I0205 03:30:36.302353   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.302738   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.302779   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.302961   77242 main.go:141] libmachine: Docker is up and running!
	I0205 03:30:36.302971   77242 main.go:141] libmachine: Reticulating splines...
	I0205 03:30:36.302977   77242 client.go:171] duration metric: took 24.494043678s to LocalClient.Create
	I0205 03:30:36.302997   77242 start.go:167] duration metric: took 24.494106892s to libmachine.API.Create "bridge-253147"
	I0205 03:30:36.303007   77242 start.go:293] postStartSetup for "bridge-253147" (driver="kvm2")
	I0205 03:30:36.303015   77242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:30:36.303031   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:36.303251   77242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:30:36.303284   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:36.305501   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.305932   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.305957   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.306294   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:36.306472   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.306619   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:36.306722   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:36.403469   77242 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:30:36.407367   77242 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:30:36.407383   77242 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:30:36.407435   77242 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:30:36.407512   77242 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:30:36.407604   77242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:30:36.416957   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:30:36.441130   77242 start.go:296] duration metric: took 138.106155ms for postStartSetup
	I0205 03:30:36.441177   77242 main.go:141] libmachine: (bridge-253147) Calling .GetConfigRaw
	I0205 03:30:36.441741   77242 main.go:141] libmachine: (bridge-253147) Calling .GetIP
	I0205 03:30:36.444247   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.444572   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.444603   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.444823   77242 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/config.json ...
	I0205 03:30:36.445036   77242 start.go:128] duration metric: took 24.654147667s to createHost
	I0205 03:30:36.445058   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:36.447141   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.447406   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.447434   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.447526   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:36.447690   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.447849   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.447996   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:36.448157   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:36.448326   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:36.448337   77242 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:30:36.566239   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738726236.524736391
	
	I0205 03:30:36.566263   77242 fix.go:216] guest clock: 1738726236.524736391
	I0205 03:30:36.566276   77242 fix.go:229] Guest: 2025-02-05 03:30:36.524736391 +0000 UTC Remote: 2025-02-05 03:30:36.445048492 +0000 UTC m=+24.767683288 (delta=79.687899ms)
	I0205 03:30:36.566299   77242 fix.go:200] guest clock delta is within tolerance: 79.687899ms
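The guest-clock check above compares a timestamp read over SSH (date +%s.%N) against the host's wall clock and accepts a small drift. A tiny Go sketch of that comparison; the 2-second tolerance here is purely an assumed example value, not the threshold fix.go actually uses.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host
// clock to skip a resync; the log above observed a delta of ~79.7ms.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(79687899 * time.Nanosecond) // the delta reported in the log
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
}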
	I0205 03:30:36.566306   77242 start.go:83] releasing machines lock for "bridge-253147", held for 24.775483528s
	I0205 03:30:36.566341   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:36.566706   77242 main.go:141] libmachine: (bridge-253147) Calling .GetIP
	I0205 03:30:36.570113   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.570549   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.570577   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.570758   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:36.571264   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:36.571437   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:36.571568   77242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:30:36.571625   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:36.571732   77242 ssh_runner.go:195] Run: cat /version.json
	I0205 03:30:36.571757   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:36.574435   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.574774   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.574794   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.574815   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.575013   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:36.575172   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.575202   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.575224   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.575423   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:36.575485   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:36.575574   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.575594   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:36.575707   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:36.575863   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:36.666606   77242 ssh_runner.go:195] Run: systemctl --version
	I0205 03:30:36.689181   77242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:30:36.844523   77242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:30:36.851067   77242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:30:36.851145   77242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:30:36.877967   77242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0205 03:30:36.878000   77242 start.go:495] detecting cgroup driver to use...
	I0205 03:30:36.878076   77242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:30:36.902472   77242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:30:36.919323   77242 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:30:36.919375   77242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:30:36.935680   77242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:30:36.952117   77242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:30:37.090962   77242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:30:37.260333   77242 docker.go:233] disabling docker service ...
	I0205 03:30:37.260399   77242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:30:37.274613   77242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:30:37.287948   77242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:30:37.434874   77242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:30:37.547055   77242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:30:37.561538   77242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:30:37.580522   77242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 03:30:37.580577   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.591002   77242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:30:37.591078   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.601654   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.612609   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.629512   77242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:30:37.639950   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.650310   77242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.666925   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
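The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl). A standalone Go sketch of the same style of line-oriented in-place edit for the first two settings; it runs locally rather than through ssh_runner and is illustrative only.

package main

import (
	"os"
	"regexp"
)

func main() {
	// Same file the sed commands in the log edit; adjust the path when testing
	// outside a minikube guest.
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Replace whole lines, mirroring `sed -i 's|^.*pause_image = .*$|...|'`.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}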
	I0205 03:30:37.677358   77242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:30:37.686660   77242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0205 03:30:37.686720   77242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0205 03:30:37.700081   77242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 03:30:37.709751   77242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:30:37.818587   77242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:30:37.910436   77242 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:30:37.910512   77242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:30:37.915133   77242 start.go:563] Will wait 60s for crictl version
	I0205 03:30:37.915196   77242 ssh_runner.go:195] Run: which crictl
	I0205 03:30:37.918892   77242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:30:37.960248   77242 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:30:37.960337   77242 ssh_runner.go:195] Run: crio --version
	I0205 03:30:37.988457   77242 ssh_runner.go:195] Run: crio --version
	I0205 03:30:38.019169   77242 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0205 03:30:36.568227   77491 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0205 03:30:36.568439   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:36.568499   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:36.586399   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0205 03:30:36.586865   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:36.587473   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:30:36.587506   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:36.587862   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:36.588080   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetMachineName
	I0205 03:30:36.588278   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:30:36.588507   77491 start.go:159] libmachine.API.Create for "enable-default-cni-253147" (driver="kvm2")
	I0205 03:30:36.588537   77491 client.go:168] LocalClient.Create starting
	I0205 03:30:36.588571   77491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem
	I0205 03:30:36.588609   77491 main.go:141] libmachine: Decoding PEM data...
	I0205 03:30:36.588632   77491 main.go:141] libmachine: Parsing certificate...
	I0205 03:30:36.588699   77491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem
	I0205 03:30:36.588725   77491 main.go:141] libmachine: Decoding PEM data...
	I0205 03:30:36.588743   77491 main.go:141] libmachine: Parsing certificate...
	I0205 03:30:36.588764   77491 main.go:141] libmachine: Running pre-create checks...
	I0205 03:30:36.588779   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .PreCreateCheck
	I0205 03:30:36.589248   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetConfigRaw
	I0205 03:30:36.589780   77491 main.go:141] libmachine: Creating machine...
	I0205 03:30:36.589798   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Create
	I0205 03:30:36.589992   77491 main.go:141] libmachine: (enable-default-cni-253147) creating KVM machine...
	I0205 03:30:36.590011   77491 main.go:141] libmachine: (enable-default-cni-253147) creating network...
	I0205 03:30:36.596901   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found existing default KVM network
	I0205 03:30:36.598770   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.598586   79034 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e6:1a:05} reservation:<nil>}
	I0205 03:30:36.599989   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.599894   79034 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:c2:cc} reservation:<nil>}
	I0205 03:30:36.600768   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.600689   79034 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:30:9e:5e} reservation:<nil>}
	I0205 03:30:36.601805   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.601724   79034 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003d05d0}
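The network.go lines above walk candidate /24 subnets and pick the first one no host interface already uses. A rough Go sketch of that idea; it only checks local interface addresses and does not track reservations, so it is not minikube's actual selection logic.

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address falls inside the candidate
// subnet (or vice versa) -- a rough stand-in for the "skipping subnet ... that
// is taken" checks in the log.
func taken(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative if we cannot inspect interfaces
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok &&
			(subnet.Contains(ipnet.IP) || ipnet.Contains(subnet.IP)) {
			return true
		}
	}
	return false
}

func main() {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	for _, c := range candidates {
		_, subnet, err := net.ParseCIDR(c)
		if err != nil {
			continue
		}
		if !taken(subnet) {
			fmt.Println("using free private subnet", c)
			return
		}
		fmt.Println("skipping subnet that is taken:", c)
	}
}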
	I0205 03:30:36.601858   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | created network xml: 
	I0205 03:30:36.601878   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | <network>
	I0205 03:30:36.601886   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   <name>mk-enable-default-cni-253147</name>
	I0205 03:30:36.601892   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   <dns enable='no'/>
	I0205 03:30:36.601898   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   
	I0205 03:30:36.601907   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0205 03:30:36.601913   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |     <dhcp>
	I0205 03:30:36.601918   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0205 03:30:36.601923   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |     </dhcp>
	I0205 03:30:36.601927   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   </ip>
	I0205 03:30:36.601951   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   
	I0205 03:30:36.601978   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | </network>
	I0205 03:30:36.601993   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | 
	I0205 03:30:36.606932   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | trying to create private KVM network mk-enable-default-cni-253147 192.168.72.0/24...
	I0205 03:30:36.685628   77491 main.go:141] libmachine: (enable-default-cni-253147) setting up store path in /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147 ...
	I0205 03:30:36.685665   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | private KVM network mk-enable-default-cni-253147 192.168.72.0/24 created
	I0205 03:30:36.685677   77491 main.go:141] libmachine: (enable-default-cni-253147) building disk image from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0205 03:30:36.685723   77491 main.go:141] libmachine: (enable-default-cni-253147) Downloading /home/jenkins/minikube-integration/20363-12788/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0205 03:30:36.685743   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.685534   79034 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:30:36.962955   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.962841   79034 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa...
	I0205 03:30:37.048897   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:37.048737   79034 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/enable-default-cni-253147.rawdisk...
	I0205 03:30:37.048942   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Writing magic tar header
	I0205 03:30:37.048992   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Writing SSH key tar header
	I0205 03:30:37.049025   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147 (perms=drwx------)
	I0205 03:30:37.049045   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:37.048849   79034 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147 ...
	I0205 03:30:37.049076   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147
	I0205 03:30:37.049090   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines
	I0205 03:30:37.049103   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:30:37.049131   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788
	I0205 03:30:37.049146   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines (perms=drwxr-xr-x)
	I0205 03:30:37.049162   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube (perms=drwxr-xr-x)
	I0205 03:30:37.049176   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788 (perms=drwxrwxr-x)
	I0205 03:30:37.049187   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0205 03:30:37.049201   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0205 03:30:37.049214   77491 main.go:141] libmachine: (enable-default-cni-253147) creating domain...
	I0205 03:30:37.049224   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0205 03:30:37.049235   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins
	I0205 03:30:37.049243   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home
	I0205 03:30:37.049253   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | skipping /home - not owner
	I0205 03:30:37.050287   77491 main.go:141] libmachine: (enable-default-cni-253147) define libvirt domain using xml: 
	I0205 03:30:37.050315   77491 main.go:141] libmachine: (enable-default-cni-253147) <domain type='kvm'>
	I0205 03:30:37.050330   77491 main.go:141] libmachine: (enable-default-cni-253147)   <name>enable-default-cni-253147</name>
	I0205 03:30:37.050350   77491 main.go:141] libmachine: (enable-default-cni-253147)   <memory unit='MiB'>3072</memory>
	I0205 03:30:37.050360   77491 main.go:141] libmachine: (enable-default-cni-253147)   <vcpu>2</vcpu>
	I0205 03:30:37.050394   77491 main.go:141] libmachine: (enable-default-cni-253147)   <features>
	I0205 03:30:37.050406   77491 main.go:141] libmachine: (enable-default-cni-253147)     <acpi/>
	I0205 03:30:37.050413   77491 main.go:141] libmachine: (enable-default-cni-253147)     <apic/>
	I0205 03:30:37.050425   77491 main.go:141] libmachine: (enable-default-cni-253147)     <pae/>
	I0205 03:30:37.050432   77491 main.go:141] libmachine: (enable-default-cni-253147)     
	I0205 03:30:37.050446   77491 main.go:141] libmachine: (enable-default-cni-253147)   </features>
	I0205 03:30:37.050454   77491 main.go:141] libmachine: (enable-default-cni-253147)   <cpu mode='host-passthrough'>
	I0205 03:30:37.050466   77491 main.go:141] libmachine: (enable-default-cni-253147)   
	I0205 03:30:37.050473   77491 main.go:141] libmachine: (enable-default-cni-253147)   </cpu>
	I0205 03:30:37.050500   77491 main.go:141] libmachine: (enable-default-cni-253147)   <os>
	I0205 03:30:37.050523   77491 main.go:141] libmachine: (enable-default-cni-253147)     <type>hvm</type>
	I0205 03:30:37.050536   77491 main.go:141] libmachine: (enable-default-cni-253147)     <boot dev='cdrom'/>
	I0205 03:30:37.050550   77491 main.go:141] libmachine: (enable-default-cni-253147)     <boot dev='hd'/>
	I0205 03:30:37.050563   77491 main.go:141] libmachine: (enable-default-cni-253147)     <bootmenu enable='no'/>
	I0205 03:30:37.050570   77491 main.go:141] libmachine: (enable-default-cni-253147)   </os>
	I0205 03:30:37.050580   77491 main.go:141] libmachine: (enable-default-cni-253147)   <devices>
	I0205 03:30:37.050601   77491 main.go:141] libmachine: (enable-default-cni-253147)     <disk type='file' device='cdrom'>
	I0205 03:30:37.050616   77491 main.go:141] libmachine: (enable-default-cni-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/boot2docker.iso'/>
	I0205 03:30:37.050630   77491 main.go:141] libmachine: (enable-default-cni-253147)       <target dev='hdc' bus='scsi'/>
	I0205 03:30:37.050640   77491 main.go:141] libmachine: (enable-default-cni-253147)       <readonly/>
	I0205 03:30:37.050650   77491 main.go:141] libmachine: (enable-default-cni-253147)     </disk>
	I0205 03:30:37.050667   77491 main.go:141] libmachine: (enable-default-cni-253147)     <disk type='file' device='disk'>
	I0205 03:30:37.050701   77491 main.go:141] libmachine: (enable-default-cni-253147)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0205 03:30:37.050722   77491 main.go:141] libmachine: (enable-default-cni-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/enable-default-cni-253147.rawdisk'/>
	I0205 03:30:37.050735   77491 main.go:141] libmachine: (enable-default-cni-253147)       <target dev='hda' bus='virtio'/>
	I0205 03:30:37.050742   77491 main.go:141] libmachine: (enable-default-cni-253147)     </disk>
	I0205 03:30:37.050754   77491 main.go:141] libmachine: (enable-default-cni-253147)     <interface type='network'>
	I0205 03:30:37.050766   77491 main.go:141] libmachine: (enable-default-cni-253147)       <source network='mk-enable-default-cni-253147'/>
	I0205 03:30:37.050778   77491 main.go:141] libmachine: (enable-default-cni-253147)       <model type='virtio'/>
	I0205 03:30:37.050785   77491 main.go:141] libmachine: (enable-default-cni-253147)     </interface>
	I0205 03:30:37.050820   77491 main.go:141] libmachine: (enable-default-cni-253147)     <interface type='network'>
	I0205 03:30:37.050846   77491 main.go:141] libmachine: (enable-default-cni-253147)       <source network='default'/>
	I0205 03:30:37.050859   77491 main.go:141] libmachine: (enable-default-cni-253147)       <model type='virtio'/>
	I0205 03:30:37.050872   77491 main.go:141] libmachine: (enable-default-cni-253147)     </interface>
	I0205 03:30:37.050886   77491 main.go:141] libmachine: (enable-default-cni-253147)     <serial type='pty'>
	I0205 03:30:37.050896   77491 main.go:141] libmachine: (enable-default-cni-253147)       <target port='0'/>
	I0205 03:30:37.050910   77491 main.go:141] libmachine: (enable-default-cni-253147)     </serial>
	I0205 03:30:37.050935   77491 main.go:141] libmachine: (enable-default-cni-253147)     <console type='pty'>
	I0205 03:30:37.050954   77491 main.go:141] libmachine: (enable-default-cni-253147)       <target type='serial' port='0'/>
	I0205 03:30:37.050968   77491 main.go:141] libmachine: (enable-default-cni-253147)     </console>
	I0205 03:30:37.050981   77491 main.go:141] libmachine: (enable-default-cni-253147)     <rng model='virtio'>
	I0205 03:30:37.050996   77491 main.go:141] libmachine: (enable-default-cni-253147)       <backend model='random'>/dev/random</backend>
	I0205 03:30:37.051013   77491 main.go:141] libmachine: (enable-default-cni-253147)     </rng>
	I0205 03:30:37.051025   77491 main.go:141] libmachine: (enable-default-cni-253147)     
	I0205 03:30:37.051033   77491 main.go:141] libmachine: (enable-default-cni-253147)     
	I0205 03:30:37.051058   77491 main.go:141] libmachine: (enable-default-cni-253147)   </devices>
	I0205 03:30:37.051069   77491 main.go:141] libmachine: (enable-default-cni-253147) </domain>
	I0205 03:30:37.051084   77491 main.go:141] libmachine: (enable-default-cni-253147) 
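For illustration, a simplified Go sketch that renders a libvirt domain definition similar to the one dumped above using text/template; the struct fields, the template text and the file paths are placeholders, not the template the kvm2 driver actually uses.

package main

import (
	"os"
	"text/template"
)

type domain struct {
	Name      string
	MemoryMiB int
	CPUs      int
	ISO       string
	Disk      string
	Network   string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	_ = t.Execute(os.Stdout, domain{
		Name:      "enable-default-cni-253147",
		MemoryMiB: 3072,
		CPUs:      2,
		ISO:       "/path/to/boot2docker.iso",           // placeholder
		Disk:      "/path/to/enable-default-cni-253147.rawdisk", // placeholder
		Network:   "mk-enable-default-cni-253147",
	})
}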
	I0205 03:30:37.057819   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:ca:ee:a6 in network default
	I0205 03:30:37.058459   77491 main.go:141] libmachine: (enable-default-cni-253147) starting domain...
	I0205 03:30:37.058492   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:37.058503   77491 main.go:141] libmachine: (enable-default-cni-253147) ensuring networks are active...
	I0205 03:30:37.059172   77491 main.go:141] libmachine: (enable-default-cni-253147) Ensuring network default is active
	I0205 03:30:37.059502   77491 main.go:141] libmachine: (enable-default-cni-253147) Ensuring network mk-enable-default-cni-253147 is active
	I0205 03:30:37.060055   77491 main.go:141] libmachine: (enable-default-cni-253147) getting domain XML...
	I0205 03:30:37.060913   77491 main.go:141] libmachine: (enable-default-cni-253147) creating domain...
	I0205 03:30:38.021680   77242 main.go:141] libmachine: (bridge-253147) Calling .GetIP
	I0205 03:30:38.024681   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:38.025408   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:38.025438   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:38.025657   77242 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0205 03:30:38.030062   77242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:30:38.042534   77242 kubeadm.go:883] updating cluster {Name:bridge-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:30:38.042666   77242 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:30:38.042721   77242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:30:38.073875   77242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0205 03:30:38.073941   77242 ssh_runner.go:195] Run: which lz4
	I0205 03:30:38.077754   77242 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0205 03:30:38.081778   77242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0205 03:30:38.081812   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0205 03:30:39.456367   77242 crio.go:462] duration metric: took 1.378647417s to copy over tarball
	I0205 03:30:39.456478   77242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0205 03:30:41.702914   77242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.246400913s)
	I0205 03:30:41.702942   77242 crio.go:469] duration metric: took 2.246548889s to extract the tarball
	I0205 03:30:41.702949   77242 ssh_runner.go:146] rm: /preloaded.tar.lz4
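The extraction step above shells out to tar with lz4 decompression to unpack the preloaded images under /var. A minimal Go sketch of the same invocation, run locally instead of through ssh_runner, with the flags and path taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preload extracted")
}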
	I0205 03:30:38.623277   77491 main.go:141] libmachine: (enable-default-cni-253147) waiting for IP...
	I0205 03:30:38.624244   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:38.624745   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:38.624808   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:38.624736   79034 retry.go:31] will retry after 225.225942ms: waiting for domain to come up
	I0205 03:30:38.851117   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:38.851700   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:38.851736   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:38.851663   79034 retry.go:31] will retry after 298.69382ms: waiting for domain to come up
	I0205 03:30:39.152119   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:39.152754   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:39.152784   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:39.152732   79034 retry.go:31] will retry after 386.740633ms: waiting for domain to come up
	I0205 03:30:39.541393   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:39.542023   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:39.542053   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:39.541971   79034 retry.go:31] will retry after 608.707393ms: waiting for domain to come up
	I0205 03:30:40.152792   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:40.153372   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:40.153416   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:40.153305   79034 retry.go:31] will retry after 759.53705ms: waiting for domain to come up
	I0205 03:30:40.914923   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:40.915442   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:40.915482   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:40.915365   79034 retry.go:31] will retry after 831.206233ms: waiting for domain to come up
	I0205 03:30:41.747692   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:41.748289   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:41.748312   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:41.748252   79034 retry.go:31] will retry after 976.271323ms: waiting for domain to come up
	I0205 03:30:42.725992   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:42.726511   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:42.726541   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:42.726474   79034 retry.go:31] will retry after 1.384186891s: waiting for domain to come up
	I0205 03:30:41.742178   77242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:30:41.783096   77242 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:30:41.783121   77242 cache_images.go:84] Images are preloaded, skipping loading
	I0205 03:30:41.783129   77242 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.32.1 crio true true} ...
	I0205 03:30:41.783238   77242 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-253147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0205 03:30:41.783322   77242 ssh_runner.go:195] Run: crio config
	I0205 03:30:41.827102   77242 cni.go:84] Creating CNI manager for "bridge"
	I0205 03:30:41.827126   77242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:30:41.827149   77242 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-253147 NodeName:bridge-253147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 03:30:41.827274   77242 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-253147"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
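	The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the next steps copy to /var/tmp/minikube/kubeadm.yaml.new on the guest. A small Go sketch, assuming gopkg.in/yaml.v3 is available and a local kubeadm.yaml copy exists, that splits and sanity-checks such a stream; it is not part of minikube.

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Local copy of the generated config; the path is a placeholder.
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break // end of the multi-document stream
		}
		if err != nil {
			panic(err)
		}
		// Expected output includes e.g. kubeadm.k8s.io/v1beta4/InitConfiguration.
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}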
	
	I0205 03:30:41.827330   77242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 03:30:41.838886   77242 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:30:41.838962   77242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:30:41.849628   77242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0205 03:30:41.865611   77242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:30:41.881881   77242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0205 03:30:41.899035   77242 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I0205 03:30:41.903180   77242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:30:41.915482   77242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:30:42.037310   77242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:30:42.054641   77242 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147 for IP: 192.168.50.246
	I0205 03:30:42.054670   77242 certs.go:194] generating shared ca certs ...
	I0205 03:30:42.054687   77242 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.054872   77242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:30:42.054937   77242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:30:42.054951   77242 certs.go:256] generating profile certs ...
	I0205 03:30:42.055020   77242 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.key
	I0205 03:30:42.055037   77242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt with IP's: []
	I0205 03:30:42.569882   77242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt ...
	I0205 03:30:42.569913   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: {Name:mk9a07762772c282594ff48594c243d2d9334ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.570097   77242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.key ...
	I0205 03:30:42.570118   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.key: {Name:mk87a68ecf8140f29e5563ad400fddaa65c48f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.570236   77242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key.dc1c4caa
	I0205 03:30:42.570253   77242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt.dc1c4caa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.246]
	I0205 03:30:42.774168   77242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt.dc1c4caa ...
	I0205 03:30:42.774204   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt.dc1c4caa: {Name:mk67a191c1ba6ea30d49291e3357f57aedb3b4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.774371   77242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key.dc1c4caa ...
	I0205 03:30:42.774382   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key.dc1c4caa: {Name:mk2bd494cab1fae00acfa6c66a4fba8665b6a2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.774453   77242 certs.go:381] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt.dc1c4caa -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt
	I0205 03:30:42.774521   77242 certs.go:385] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key.dc1c4caa -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key
	I0205 03:30:42.774572   77242 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.key
	I0205 03:30:42.774587   77242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.crt with IP's: []
	I0205 03:30:42.937538   77242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.crt ...
	I0205 03:30:42.937565   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.crt: {Name:mk1f0ff274bc255dae590ed4bd030fbfba893f8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.937751   77242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.key ...
	I0205 03:30:42.937766   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.key: {Name:mk9f536bd3f2c933036a6bc72e71b0bba8b96640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.937961   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:30:42.937997   77242 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:30:42.938006   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:30:42.938028   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:30:42.938051   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:30:42.938072   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:30:42.938108   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:30:42.938629   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:30:42.972473   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:30:42.998804   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:30:43.025966   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:30:43.050717   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0205 03:30:43.075229   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0205 03:30:43.103011   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:30:43.127446   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 03:30:43.150497   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:30:43.173604   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:30:43.195689   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:30:43.218757   77242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:30:43.237177   77242 ssh_runner.go:195] Run: openssl version
	I0205 03:30:43.242897   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:30:43.254768   77242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:30:43.259295   77242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:30:43.259333   77242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:30:43.265475   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:30:43.276137   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:30:43.286900   77242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:30:43.291399   77242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:30:43.291460   77242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:30:43.297115   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 03:30:43.309405   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19989.pem && ln -fs /usr/share/ca-certificates/19989.pem /etc/ssl/certs/19989.pem"
	I0205 03:30:43.319635   77242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19989.pem
	I0205 03:30:43.323886   77242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:11 /usr/share/ca-certificates/19989.pem
	I0205 03:30:43.323936   77242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19989.pem
	I0205 03:30:43.329782   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19989.pem /etc/ssl/certs/51391683.0"
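	(Note: the three cert blocks above each follow the same pattern: hash the PEM with openssl, then symlink it into /etc/ssl/certs as "<hash>.0" so OpenSSL's default verify path can find it. A minimal Go sketch of that loop, run locally with passwordless sudo instead of through minikube's ssh_runner, which is an assumption made only for illustration:)
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// linkCACert reproduces the pattern from the log: hash the certificate with
	// openssl, then symlink it into /etc/ssl/certs as "<hash>.0" so OpenSSL's
	// default verify paths can find it. Running locally via sudo is illustrative.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		// "ln -fs" overwrites any existing link, matching the commands in the log.
		return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
	}
	
	func main() {
		for _, p := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/19989.pem",
		} {
			if err := linkCACert(p); err != nil {
				fmt.Println("warning:", err)
			}
		}
	}
	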
	I0205 03:30:43.340548   77242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 03:30:43.344485   77242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0205 03:30:43.344534   77242 kubeadm.go:392] StartCluster: {Name:bridge-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:30:43.344602   77242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 03:30:43.344657   77242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:30:43.381149   77242 cri.go:89] found id: ""
	I0205 03:30:43.381213   77242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 03:30:43.391256   77242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 03:30:43.400986   77242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:30:43.410378   77242 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:30:43.410396   77242 kubeadm.go:157] found existing configuration files:
	
	I0205 03:30:43.410431   77242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:30:43.420609   77242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:30:43.420667   77242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:30:43.430573   77242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:30:43.440182   77242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:30:43.440244   77242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:30:43.449936   77242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:30:43.459622   77242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:30:43.459680   77242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:30:43.469604   77242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:30:43.478500   77242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:30:43.478562   77242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
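	(Note: the four grep/rm pairs above implement a simple staleness check: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted before kubeadm init runs. A hedged Go sketch of the same loop, run locally with sudo rather than over SSH:)
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// cleanStaleKubeconfigs removes any kubeconfig under /etc/kubernetes that does
	// not reference the expected control-plane endpoint, so "kubeadm init" starts
	// from a clean slate. The endpoint and file list come straight from the log.
	func cleanStaleKubeconfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				// grep exits non-zero when the file is missing or lacks the endpoint.
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}
	
	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}
	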
	I0205 03:30:43.487477   77242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:30:43.546419   77242 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0205 03:30:43.546512   77242 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:30:43.649107   77242 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:30:43.649286   77242 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:30:43.649477   77242 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0205 03:30:43.658297   77242 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:30:43.787428   77242 out.go:235]   - Generating certificates and keys ...
	I0205 03:30:43.787557   77242 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:30:43.787670   77242 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:30:43.842289   77242 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 03:30:43.993164   77242 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 03:30:44.119322   77242 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 03:30:44.302079   77242 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 03:30:44.425710   77242 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 03:30:44.425881   77242 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-253147 localhost] and IPs [192.168.50.246 127.0.0.1 ::1]
	I0205 03:30:44.571842   77242 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 03:30:44.572049   77242 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-253147 localhost] and IPs [192.168.50.246 127.0.0.1 ::1]
	I0205 03:30:44.694047   77242 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 03:30:44.746113   77242 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 03:30:44.857769   77242 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 03:30:44.857851   77242 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:30:45.173228   77242 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:30:45.327637   77242 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0205 03:30:45.561572   77242 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:30:45.829713   77242 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:30:45.971023   77242 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:30:45.971651   77242 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:30:45.974124   77242 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:30:45.976752   77242 out.go:235]   - Booting up control plane ...
	I0205 03:30:45.976870   77242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:30:45.976949   77242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:30:45.977022   77242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:30:45.997190   77242 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:30:46.005270   77242 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:30:46.005384   77242 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:30:46.127130   77242 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0205 03:30:46.127269   77242 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0205 03:30:44.111979   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:44.112507   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:44.112555   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:44.112499   79034 retry.go:31] will retry after 1.790961133s: waiting for domain to come up
	I0205 03:30:45.905133   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:45.905720   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:45.905748   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:45.905692   79034 retry.go:31] will retry after 1.666031127s: waiting for domain to come up
	I0205 03:30:47.573282   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:47.573933   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:47.573968   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:47.573886   79034 retry.go:31] will retry after 1.867135722s: waiting for domain to come up
	I0205 03:30:47.128625   77242 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001366822s
	I0205 03:30:47.128739   77242 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0205 03:30:51.630336   77242 kubeadm.go:310] [api-check] The API server is healthy after 4.501384276s
	I0205 03:30:51.643882   77242 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0205 03:30:52.160875   77242 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0205 03:30:52.194216   77242 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0205 03:30:52.194481   77242 kubeadm.go:310] [mark-control-plane] Marking the node bridge-253147 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0205 03:30:52.204470   77242 kubeadm.go:310] [bootstrap-token] Using token: cylh84.xficas9ll5cpdlvf
	I0205 03:30:49.444153   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:49.444687   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:49.444714   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:49.444635   79034 retry.go:31] will retry after 2.913102259s: waiting for domain to come up
	I0205 03:30:52.359492   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:52.360086   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:52.360115   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:52.360057   79034 retry.go:31] will retry after 4.239584755s: waiting for domain to come up
	I0205 03:30:52.205969   77242 out.go:235]   - Configuring RBAC rules ...
	I0205 03:30:52.206118   77242 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0205 03:30:52.212375   77242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0205 03:30:52.218595   77242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0205 03:30:52.221987   77242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0205 03:30:52.225513   77242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0205 03:30:52.231498   77242 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0205 03:30:52.355302   77242 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0205 03:30:52.792077   77242 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0205 03:30:53.360431   77242 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0205 03:30:53.360467   77242 kubeadm.go:310] 
	I0205 03:30:53.360595   77242 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0205 03:30:53.360622   77242 kubeadm.go:310] 
	I0205 03:30:53.360747   77242 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0205 03:30:53.360762   77242 kubeadm.go:310] 
	I0205 03:30:53.360805   77242 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0205 03:30:53.360886   77242 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0205 03:30:53.360965   77242 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0205 03:30:53.360977   77242 kubeadm.go:310] 
	I0205 03:30:53.361085   77242 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0205 03:30:53.361103   77242 kubeadm.go:310] 
	I0205 03:30:53.361154   77242 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0205 03:30:53.361174   77242 kubeadm.go:310] 
	I0205 03:30:53.361233   77242 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0205 03:30:53.361360   77242 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0205 03:30:53.361462   77242 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0205 03:30:53.361470   77242 kubeadm.go:310] 
	I0205 03:30:53.361571   77242 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0205 03:30:53.361685   77242 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0205 03:30:53.361694   77242 kubeadm.go:310] 
	I0205 03:30:53.361795   77242 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cylh84.xficas9ll5cpdlvf \
	I0205 03:30:53.361931   77242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 \
	I0205 03:30:53.361962   77242 kubeadm.go:310] 	--control-plane 
	I0205 03:30:53.361979   77242 kubeadm.go:310] 
	I0205 03:30:53.362083   77242 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0205 03:30:53.362097   77242 kubeadm.go:310] 
	I0205 03:30:53.362206   77242 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cylh84.xficas9ll5cpdlvf \
	I0205 03:30:53.362326   77242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 
	I0205 03:30:53.362602   77242 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:30:53.362732   77242 cni.go:84] Creating CNI manager for "bridge"
	I0205 03:30:53.364286   77242 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0205 03:30:53.365521   77242 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0205 03:30:53.380070   77242 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
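	(Note: the 496-byte conflist copied above is not shown in the log, so the snippet below is only a generic example of the bridge + host-local format such a file uses, embedded as a Go string to keep it self-contained. It is not necessarily minikube's exact template, and the subnet is a placeholder.)
	
	package main
	
	import "fmt"
	
	// A generic bridge + host-local CNI conflist, illustrating the shape of a file
	// like /etc/cni/net.d/1-k8s.conflist. Contents are an assumption, not the
	// bytes the log actually transferred.
	const bridgeConflist = `{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16",
	        "routes": [{"dst": "0.0.0.0/0"}]
	      }
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`
	
	func main() { fmt.Println(bridgeConflist) }
	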
	I0205 03:30:53.397428   77242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 03:30:53.397528   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:53.397539   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-253147 minikube.k8s.io/updated_at=2025_02_05T03_30_53_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d minikube.k8s.io/name=bridge-253147 minikube.k8s.io/primary=true
	I0205 03:30:53.516425   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:53.550197   77242 ops.go:34] apiserver oom_adj: -16
	I0205 03:30:54.016512   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:54.516802   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:55.016599   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:55.516750   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:56.017128   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:56.516827   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:57.017214   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:57.124482   77242 kubeadm.go:1113] duration metric: took 3.727019948s to wait for elevateKubeSystemPrivileges
	I0205 03:30:57.124523   77242 kubeadm.go:394] duration metric: took 13.779991885s to StartCluster
	I0205 03:30:57.124540   77242 settings.go:142] acquiring lock: {Name:mk2eca847da5ba78f5b041a83e5cfcbdebb0c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:57.124619   77242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:30:57.125807   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:57.126067   77242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0205 03:30:57.126069   77242 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:30:57.126143   77242 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0205 03:30:57.126263   77242 config.go:182] Loaded profile config "bridge-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:30:57.126269   77242 addons.go:69] Setting storage-provisioner=true in profile "bridge-253147"
	I0205 03:30:57.126285   77242 addons.go:69] Setting default-storageclass=true in profile "bridge-253147"
	I0205 03:30:57.126336   77242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-253147"
	I0205 03:30:57.126289   77242 addons.go:238] Setting addon storage-provisioner=true in "bridge-253147"
	I0205 03:30:57.126431   77242 host.go:66] Checking if "bridge-253147" exists ...
	I0205 03:30:57.126791   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:57.126821   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:57.126827   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:57.126870   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:57.128477   77242 out.go:177] * Verifying Kubernetes components...
	I0205 03:30:57.129687   77242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:30:57.142273   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0205 03:30:57.142285   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38755
	I0205 03:30:57.142713   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:57.142745   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:57.143234   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:57.143236   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:57.143257   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:57.143274   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:57.143634   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:57.143692   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:57.143929   77242 main.go:141] libmachine: (bridge-253147) Calling .GetState
	I0205 03:30:57.144286   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:57.144329   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:57.147278   77242 addons.go:238] Setting addon default-storageclass=true in "bridge-253147"
	I0205 03:30:57.147316   77242 host.go:66] Checking if "bridge-253147" exists ...
	I0205 03:30:57.147680   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:57.147729   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:57.160541   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I0205 03:30:57.160988   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:57.161624   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:57.161653   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:57.162040   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:57.162245   77242 main.go:141] libmachine: (bridge-253147) Calling .GetState
	I0205 03:30:57.162579   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0205 03:30:57.162951   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:57.163339   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:57.163362   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:57.163669   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:57.164316   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:57.164359   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:57.164577   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:57.166160   77242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:30:57.167311   77242 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:30:57.167328   77242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0205 03:30:57.167341   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:57.170492   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:57.170949   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:57.170979   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:57.171259   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:57.171440   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:57.171584   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:57.171739   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:57.180157   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38219
	I0205 03:30:57.180673   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:57.181154   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:57.181177   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:57.181558   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:57.181780   77242 main.go:141] libmachine: (bridge-253147) Calling .GetState
	I0205 03:30:57.183306   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:57.183531   77242 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0205 03:30:57.183546   77242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0205 03:30:57.183562   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:57.186167   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:57.186542   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:57.186570   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:57.186722   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:57.186925   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:57.187092   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:57.187223   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:57.353736   77242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0205 03:30:57.353846   77242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:30:57.377831   77242 node_ready.go:35] waiting up to 15m0s for node "bridge-253147" to be "Ready" ...
	I0205 03:30:57.414576   77242 node_ready.go:49] node "bridge-253147" has status "Ready":"True"
	I0205 03:30:57.414599   77242 node_ready.go:38] duration metric: took 36.726589ms for node "bridge-253147" to be "Ready" ...
	I0205 03:30:57.414609   77242 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:30:57.434337   77242 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:30:57.501961   77242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0205 03:30:57.530380   77242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:30:57.754080   77242 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0205 03:30:57.802527   77242 main.go:141] libmachine: Making call to close driver server
	I0205 03:30:57.802558   77242 main.go:141] libmachine: (bridge-253147) Calling .Close
	I0205 03:30:57.802869   77242 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:30:57.802889   77242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:30:57.802900   77242 main.go:141] libmachine: Making call to close driver server
	I0205 03:30:57.802909   77242 main.go:141] libmachine: (bridge-253147) Calling .Close
	I0205 03:30:57.803171   77242 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:30:57.803178   77242 main.go:141] libmachine: (bridge-253147) DBG | Closing plugin on server side
	I0205 03:30:57.803187   77242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:30:57.818026   77242 main.go:141] libmachine: Making call to close driver server
	I0205 03:30:57.818048   77242 main.go:141] libmachine: (bridge-253147) Calling .Close
	I0205 03:30:57.818320   77242 main.go:141] libmachine: (bridge-253147) DBG | Closing plugin on server side
	I0205 03:30:57.818383   77242 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:30:57.818396   77242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:30:58.036646   77242 main.go:141] libmachine: Making call to close driver server
	I0205 03:30:58.036669   77242 main.go:141] libmachine: (bridge-253147) Calling .Close
	I0205 03:30:58.036933   77242 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:30:58.036945   77242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:30:58.036953   77242 main.go:141] libmachine: Making call to close driver server
	I0205 03:30:58.036959   77242 main.go:141] libmachine: (bridge-253147) Calling .Close
	I0205 03:30:58.037451   77242 main.go:141] libmachine: (bridge-253147) DBG | Closing plugin on server side
	I0205 03:30:58.037460   77242 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:30:58.037478   77242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:30:58.038806   77242 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0205 03:30:56.600932   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:56.601508   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:56.601541   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:56.601477   79034 retry.go:31] will retry after 3.74327237s: waiting for domain to come up
	I0205 03:30:58.039791   77242 addons.go:514] duration metric: took 913.647604ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0205 03:30:58.259520   77242 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-253147" context rescaled to 1 replicas
	I0205 03:30:59.439609   77242 pod_ready.go:103] pod "etcd-bridge-253147" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:01.440073   77242 pod_ready.go:103] pod "etcd-bridge-253147" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:00.349372   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:00.349816   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has current primary IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:00.349855   77491 main.go:141] libmachine: (enable-default-cni-253147) found domain IP: 192.168.72.143
	I0205 03:31:00.349879   77491 main.go:141] libmachine: (enable-default-cni-253147) reserving static IP address...
	I0205 03:31:00.350085   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-253147", mac: "52:54:00:f2:b1:0a", ip: "192.168.72.143"} in network mk-enable-default-cni-253147
	I0205 03:31:00.427216   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Getting to WaitForSSH function...
	I0205 03:31:00.427258   77491 main.go:141] libmachine: (enable-default-cni-253147) reserved static IP address 192.168.72.143 for domain enable-default-cni-253147
	I0205 03:31:00.427290   77491 main.go:141] libmachine: (enable-default-cni-253147) waiting for SSH...
	I0205 03:31:00.429895   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:00.430236   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147
	I0205 03:31:00.430266   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find defined IP address of network mk-enable-default-cni-253147 interface with MAC address 52:54:00:f2:b1:0a
	I0205 03:31:00.430425   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Using SSH client type: external
	I0205 03:31:00.430455   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa (-rw-------)
	I0205 03:31:00.430493   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 03:31:00.430517   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | About to run SSH command:
	I0205 03:31:00.430543   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | exit 0
	I0205 03:31:00.434282   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | SSH cmd err, output: exit status 255: 
	I0205 03:31:00.434311   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0205 03:31:00.434322   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | command : exit 0
	I0205 03:31:00.434334   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | err     : exit status 255
	I0205 03:31:00.434347   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | output  : 
	I0205 03:31:02.439676   77242 pod_ready.go:93] pod "etcd-bridge-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:31:02.439701   77242 pod_ready.go:82] duration metric: took 5.005335594s for pod "etcd-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:02.439711   77242 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:02.443180   77242 pod_ready.go:93] pod "kube-apiserver-bridge-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:31:02.443200   77242 pod_ready.go:82] duration metric: took 3.483384ms for pod "kube-apiserver-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:02.443208   77242 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.449470   77242 pod_ready.go:103] pod "kube-controller-manager-bridge-253147" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:04.949016   77242 pod_ready.go:93] pod "kube-controller-manager-bridge-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:31:04.949040   77242 pod_ready.go:82] duration metric: took 2.505824176s for pod "kube-controller-manager-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.949054   77242 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-tznhk" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.953268   77242 pod_ready.go:93] pod "kube-proxy-tznhk" in "kube-system" namespace has status "Ready":"True"
	I0205 03:31:04.953291   77242 pod_ready.go:82] duration metric: took 4.228529ms for pod "kube-proxy-tznhk" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.953302   77242 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.956495   77242 pod_ready.go:93] pod "kube-scheduler-bridge-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:31:04.956514   77242 pod_ready.go:82] duration metric: took 3.2049ms for pod "kube-scheduler-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.956524   77242 pod_ready.go:39] duration metric: took 7.541903694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
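	(Note: the pod_ready.go lines above poll each control-plane pod until its Ready condition is True. A rough client-go equivalent, assuming the kubeconfig written to /var/lib/minikube/kubeconfig is readable; the real helper uses minikube's own wrappers rather than this sketch:)
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitPodReady polls a pod until its Ready condition is True, roughly what the
	// pod_ready.go lines do for etcd, kube-apiserver, kube-proxy and friends.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}
	
	func main() {
		// Kubeconfig path taken from the scp step earlier in the log; access from
		// outside the guest is an assumption for this example.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(cs, "kube-system", "etcd-bridge-253147", 15*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	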
	I0205 03:31:04.956542   77242 api_server.go:52] waiting for apiserver process to appear ...
	I0205 03:31:04.956595   77242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:31:04.972420   77242 api_server.go:72] duration metric: took 7.846318586s to wait for apiserver process to appear ...
	I0205 03:31:04.972448   77242 api_server.go:88] waiting for apiserver healthz status ...
	I0205 03:31:04.972468   77242 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0205 03:31:04.977100   77242 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0205 03:31:04.978247   77242 api_server.go:141] control plane version: v1.32.1
	I0205 03:31:04.978277   77242 api_server.go:131] duration metric: took 5.821325ms to wait for apiserver health ...
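	(Note: the healthz probe above is just an HTTPS GET that expects a 200 with body "ok". A minimal Go version, with TLS verification skipped purely for the sketch; the real check trusts the cluster CA generated earlier:)
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// checkHealthz hits the apiserver /healthz endpoint the way the log does:
	// a 200 response with body "ok" means the control plane is serving.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			// InsecureSkipVerify is a simplification for this sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		fmt.Printf("%s returned 200: %s\n", url, body)
		return nil
	}
	
	func main() {
		_ = checkHealthz("https://192.168.50.246:8443/healthz")
	}
	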
	I0205 03:31:04.978287   77242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 03:31:04.980794   77242 system_pods.go:59] 7 kube-system pods found
	I0205 03:31:04.980825   77242 system_pods.go:61] "coredns-668d6bf9bc-w4q4d" [a2c00545-1eec-40a6-b4c6-0496a18806e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0205 03:31:04.980831   77242 system_pods.go:61] "etcd-bridge-253147" [f40b76f0-91f6-42de-b983-960541a36f7f] Running
	I0205 03:31:04.980836   77242 system_pods.go:61] "kube-apiserver-bridge-253147" [3b9d175a-2a6b-40f4-b941-e34a2c8dd770] Running
	I0205 03:31:04.980840   77242 system_pods.go:61] "kube-controller-manager-bridge-253147" [7f885e6d-46f5-4db7-983f-0ff5cc6fe11e] Running
	I0205 03:31:04.980844   77242 system_pods.go:61] "kube-proxy-tznhk" [25ee03b7-9305-4158-acea-769f9f5c3e80] Running
	I0205 03:31:04.980847   77242 system_pods.go:61] "kube-scheduler-bridge-253147" [3e6c7848-d410-4132-b8fa-ec9298afbafb] Running
	I0205 03:31:04.980850   77242 system_pods.go:61] "storage-provisioner" [0cc7c11d-e735-4916-9fab-0f7be7596b7b] Running
	I0205 03:31:04.980855   77242 system_pods.go:74] duration metric: took 2.562597ms to wait for pod list to return data ...
	I0205 03:31:04.980862   77242 default_sa.go:34] waiting for default service account to be created ...
	I0205 03:31:04.982923   77242 default_sa.go:45] found service account: "default"
	I0205 03:31:04.982942   77242 default_sa.go:55] duration metric: took 2.0718ms for default service account to be created ...
	I0205 03:31:04.982952   77242 system_pods.go:116] waiting for k8s-apps to be running ...
	I0205 03:31:05.038759   77242 system_pods.go:86] 7 kube-system pods found
	I0205 03:31:05.038796   77242 system_pods.go:89] "coredns-668d6bf9bc-w4q4d" [a2c00545-1eec-40a6-b4c6-0496a18806e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0205 03:31:05.038806   77242 system_pods.go:89] "etcd-bridge-253147" [f40b76f0-91f6-42de-b983-960541a36f7f] Running
	I0205 03:31:05.038821   77242 system_pods.go:89] "kube-apiserver-bridge-253147" [3b9d175a-2a6b-40f4-b941-e34a2c8dd770] Running
	I0205 03:31:05.038830   77242 system_pods.go:89] "kube-controller-manager-bridge-253147" [7f885e6d-46f5-4db7-983f-0ff5cc6fe11e] Running
	I0205 03:31:05.038836   77242 system_pods.go:89] "kube-proxy-tznhk" [25ee03b7-9305-4158-acea-769f9f5c3e80] Running
	I0205 03:31:05.038842   77242 system_pods.go:89] "kube-scheduler-bridge-253147" [3e6c7848-d410-4132-b8fa-ec9298afbafb] Running
	I0205 03:31:05.038850   77242 system_pods.go:89] "storage-provisioner" [0cc7c11d-e735-4916-9fab-0f7be7596b7b] Running
	I0205 03:31:05.038858   77242 system_pods.go:126] duration metric: took 55.900127ms to wait for k8s-apps to be running ...
	I0205 03:31:05.038870   77242 system_svc.go:44] waiting for kubelet service to be running ....
	I0205 03:31:05.038916   77242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:31:05.058257   77242 system_svc.go:56] duration metric: took 19.375476ms WaitForService to wait for kubelet
	I0205 03:31:05.058293   77242 kubeadm.go:582] duration metric: took 7.932197543s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:31:05.058311   77242 node_conditions.go:102] verifying NodePressure condition ...
	I0205 03:31:05.239036   77242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0205 03:31:05.239071   77242 node_conditions.go:123] node cpu capacity is 2
	I0205 03:31:05.239087   77242 node_conditions.go:105] duration metric: took 180.769579ms to run NodePressure ...
	I0205 03:31:05.239101   77242 start.go:241] waiting for startup goroutines ...
	I0205 03:31:05.239112   77242 start.go:246] waiting for cluster config update ...
	I0205 03:31:05.239124   77242 start.go:255] writing updated cluster config ...
	I0205 03:31:05.239493   77242 ssh_runner.go:195] Run: rm -f paused
	I0205 03:31:05.294017   77242 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0205 03:31:05.296746   77242 out.go:177] * Done! kubectl is now configured to use "bridge-253147" cluster and "default" namespace by default
	I0205 03:31:03.434515   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Getting to WaitForSSH function...
	I0205 03:31:03.436973   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.437300   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:03.437328   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.437488   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Using SSH client type: external
	I0205 03:31:03.437517   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa (-rw-------)
	I0205 03:31:03.437552   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 03:31:03.437566   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | About to run SSH command:
	I0205 03:31:03.437582   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | exit 0
	I0205 03:31:03.569720   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | SSH cmd err, output: <nil>: 
	I0205 03:31:03.570019   77491 main.go:141] libmachine: (enable-default-cni-253147) KVM machine creation complete
	I0205 03:31:03.570398   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetConfigRaw
	I0205 03:31:03.571050   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:03.571248   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:03.571394   77491 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0205 03:31:03.571410   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetState
	I0205 03:31:03.572652   77491 main.go:141] libmachine: Detecting operating system of created instance...
	I0205 03:31:03.572671   77491 main.go:141] libmachine: Waiting for SSH to be available...
	I0205 03:31:03.572678   77491 main.go:141] libmachine: Getting to WaitForSSH function...
	I0205 03:31:03.572687   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:03.574885   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.575234   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:03.575265   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.575429   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:03.575627   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.575781   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.575898   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:03.576046   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:03.576235   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:03.576246   77491 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0205 03:31:03.688735   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:31:03.688761   77491 main.go:141] libmachine: Detecting the provisioner...
	I0205 03:31:03.688784   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:03.691744   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.692124   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:03.692171   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.692296   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:03.692475   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.692610   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.692728   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:03.692870   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:03.693036   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:03.693047   77491 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0205 03:31:03.806008   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0205 03:31:03.806079   77491 main.go:141] libmachine: found compatible host: buildroot
	I0205 03:31:03.806085   77491 main.go:141] libmachine: Provisioning with buildroot...
	I0205 03:31:03.806092   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetMachineName
	I0205 03:31:03.806338   77491 buildroot.go:166] provisioning hostname "enable-default-cni-253147"
	I0205 03:31:03.806365   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetMachineName
	I0205 03:31:03.806532   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:03.809230   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.809596   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:03.809630   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.809771   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:03.809956   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.810078   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.810257   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:03.810421   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:03.810633   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:03.810647   77491 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-253147 && echo "enable-default-cni-253147" | sudo tee /etc/hostname
	I0205 03:31:03.941615   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-253147
	
	I0205 03:31:03.941643   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:03.944752   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.945135   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:03.945170   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.945409   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:03.945623   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.945809   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.945969   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:03.946143   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:03.946376   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:03.946413   77491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-253147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-253147/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-253147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:31:04.069933   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:31:04.069966   77491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:31:04.070034   77491 buildroot.go:174] setting up certificates
	I0205 03:31:04.070049   77491 provision.go:84] configureAuth start
	I0205 03:31:04.070065   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetMachineName
	I0205 03:31:04.070383   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetIP
	I0205 03:31:04.073266   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.073601   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.073628   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.073751   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.076116   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.076479   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.076514   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.076673   77491 provision.go:143] copyHostCerts
	I0205 03:31:04.076745   77491 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:31:04.076762   77491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:31:04.076827   77491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:31:04.076947   77491 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:31:04.076958   77491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:31:04.076995   77491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:31:04.077070   77491 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:31:04.077081   77491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:31:04.077109   77491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:31:04.077217   77491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-253147 san=[127.0.0.1 192.168.72.143 enable-default-cni-253147 localhost minikube]
	I0205 03:31:04.251531   77491 provision.go:177] copyRemoteCerts
	I0205 03:31:04.251605   77491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:31:04.251640   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.254390   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.254683   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.254732   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.254918   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.255115   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.255296   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.255435   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:04.343524   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0205 03:31:04.371060   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:31:04.398285   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0205 03:31:04.424766   77491 provision.go:87] duration metric: took 354.702782ms to configureAuth
	I0205 03:31:04.424795   77491 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:31:04.424952   77491 config.go:182] Loaded profile config "enable-default-cni-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:31:04.425016   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.427625   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.427918   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.427941   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.428113   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.428367   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.428554   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.428699   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.428855   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:04.429035   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:04.429053   77491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:31:04.669877   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 03:31:04.669925   77491 main.go:141] libmachine: Checking connection to Docker...
	I0205 03:31:04.669936   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetURL
	I0205 03:31:04.671447   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | using libvirt version 6000000
	I0205 03:31:04.673878   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.674280   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.674314   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.674469   77491 main.go:141] libmachine: Docker is up and running!
	I0205 03:31:04.674490   77491 main.go:141] libmachine: Reticulating splines...
	I0205 03:31:04.674496   77491 client.go:171] duration metric: took 28.085948821s to LocalClient.Create
	I0205 03:31:04.674515   77491 start.go:167] duration metric: took 28.08601116s to libmachine.API.Create "enable-default-cni-253147"
	I0205 03:31:04.674525   77491 start.go:293] postStartSetup for "enable-default-cni-253147" (driver="kvm2")
	I0205 03:31:04.674534   77491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:31:04.674551   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:04.674777   77491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:31:04.674799   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.677166   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.677546   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.677583   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.677719   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.677934   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.678110   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.678319   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:04.765407   77491 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:31:04.769563   77491 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:31:04.769590   77491 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:31:04.769676   77491 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:31:04.769804   77491 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:31:04.769960   77491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:31:04.780900   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:31:04.803771   77491 start.go:296] duration metric: took 129.206676ms for postStartSetup
	I0205 03:31:04.803864   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetConfigRaw
	I0205 03:31:04.804597   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetIP
	I0205 03:31:04.807183   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.807475   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.807496   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.807782   77491 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/config.json ...
	I0205 03:31:04.808013   77491 start.go:128] duration metric: took 28.241451408s to createHost
	I0205 03:31:04.808036   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.810436   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.810787   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.810814   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.810919   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.811109   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.811238   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.811355   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.811504   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:04.811715   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:04.811732   77491 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:31:04.926002   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738726264.914833275
	
	I0205 03:31:04.926023   77491 fix.go:216] guest clock: 1738726264.914833275
	I0205 03:31:04.926030   77491 fix.go:229] Guest: 2025-02-05 03:31:04.914833275 +0000 UTC Remote: 2025-02-05 03:31:04.808026342 +0000 UTC m=+51.788410297 (delta=106.806933ms)
	I0205 03:31:04.926064   77491 fix.go:200] guest clock delta is within tolerance: 106.806933ms
	I0205 03:31:04.926069   77491 start.go:83] releasing machines lock for "enable-default-cni-253147", held for 28.359642702s
	I0205 03:31:04.926086   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:04.926463   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetIP
	I0205 03:31:04.929123   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.929524   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.929555   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.929752   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:04.930239   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:04.930427   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:04.930513   77491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:31:04.930571   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.930629   77491 ssh_runner.go:195] Run: cat /version.json
	I0205 03:31:04.930659   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.933284   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.933605   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.933684   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.933713   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.933856   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.933950   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.933981   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.934048   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.934118   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.934190   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.934261   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.934337   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:04.934363   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.934497   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:05.038295   77491 ssh_runner.go:195] Run: systemctl --version
	I0205 03:31:05.046555   77491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:31:05.202392   77491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:31:05.208827   77491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:31:05.208909   77491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:31:05.224566   77491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0205 03:31:05.224591   77491 start.go:495] detecting cgroup driver to use...
	I0205 03:31:05.224649   77491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:31:05.241921   77491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:31:05.258520   77491 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:31:05.258573   77491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:31:05.273000   77491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:31:05.290764   77491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:31:05.415930   77491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:31:05.580644   77491 docker.go:233] disabling docker service ...
	I0205 03:31:05.580722   77491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:31:05.597829   77491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:31:05.612417   77491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:31:05.738341   77491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:31:05.858172   77491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:31:05.872230   77491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:31:05.890687   77491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 03:31:05.890756   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.900897   77491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:31:05.900964   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.911101   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.921279   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.931261   77491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:31:05.941601   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.951767   77491 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.968682   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.978994   77491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:31:05.988162   77491 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0205 03:31:05.988245   77491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0205 03:31:06.000942   77491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 03:31:06.011697   77491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:31:06.154853   77491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:31:06.247780   77491 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:31:06.247855   77491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:31:06.252823   77491 start.go:563] Will wait 60s for crictl version
	I0205 03:31:06.252885   77491 ssh_runner.go:195] Run: which crictl
	I0205 03:31:06.256583   77491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:31:06.303233   77491 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:31:06.303322   77491 ssh_runner.go:195] Run: crio --version
	I0205 03:31:06.333252   77491 ssh_runner.go:195] Run: crio --version
	I0205 03:31:06.367100   77491 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0205 03:31:06.368354   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetIP
	I0205 03:31:06.371200   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:06.371574   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:06.371610   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:06.371765   77491 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0205 03:31:06.375962   77491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:31:06.388425   77491 kubeadm.go:883] updating cluster {Name:enable-default-cni-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:31:06.388534   77491 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:31:06.388576   77491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:31:06.424506   77491 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0205 03:31:06.424581   77491 ssh_runner.go:195] Run: which lz4
	I0205 03:31:06.428194   77491 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0205 03:31:06.432079   77491 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0205 03:31:06.432103   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0205 03:31:07.771961   77491 crio.go:462] duration metric: took 1.34378928s to copy over tarball
	I0205 03:31:07.772026   77491 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0205 03:31:10.180443   77491 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.408392093s)
	I0205 03:31:10.180471   77491 crio.go:469] duration metric: took 2.408486001s to extract the tarball
	I0205 03:31:10.180478   77491 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0205 03:31:10.217082   77491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:31:10.257757   77491 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:31:10.257783   77491 cache_images.go:84] Images are preloaded, skipping loading
	I0205 03:31:10.257791   77491 kubeadm.go:934] updating node { 192.168.72.143 8443 v1.32.1 crio true true} ...
	I0205 03:31:10.257900   77491 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-253147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0205 03:31:10.257986   77491 ssh_runner.go:195] Run: crio config
	I0205 03:31:10.302669   77491 cni.go:84] Creating CNI manager for "bridge"
	I0205 03:31:10.302695   77491 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:31:10.302715   77491 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.143 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-253147 NodeName:enable-default-cni-253147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 03:31:10.302854   77491 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-253147"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.143"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.143"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 03:31:10.302912   77491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 03:31:10.313180   77491 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:31:10.313239   77491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:31:10.322875   77491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0205 03:31:10.341622   77491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:31:10.359218   77491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0205 03:31:10.375474   77491 ssh_runner.go:195] Run: grep 192.168.72.143	control-plane.minikube.internal$ /etc/hosts
	I0205 03:31:10.379017   77491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:31:10.390336   77491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:31:10.515776   77491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:31:10.531260   77491 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147 for IP: 192.168.72.143
	I0205 03:31:10.531279   77491 certs.go:194] generating shared ca certs ...
	I0205 03:31:10.531295   77491 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.531463   77491 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:31:10.531521   77491 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:31:10.531535   77491 certs.go:256] generating profile certs ...
	I0205 03:31:10.531597   77491 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.key
	I0205 03:31:10.531615   77491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt with IP's: []
	I0205 03:31:10.623511   77491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt ...
	I0205 03:31:10.623541   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: {Name:mkc3265782d36a38d39b00b5a3fdc16129a0a7f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.623733   77491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.key ...
	I0205 03:31:10.623772   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.key: {Name:mk1e41e7e69153fbaadbab1473ba194abf87affa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.623886   77491 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key.6b6e9977
	I0205 03:31:10.623903   77491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt.6b6e9977 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.143]
	I0205 03:31:10.685642   77491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt.6b6e9977 ...
	I0205 03:31:10.685673   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt.6b6e9977: {Name:mk431170a7432cb10eb1d7e8d1913a32a3b3e772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.685835   77491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key.6b6e9977 ...
	I0205 03:31:10.685850   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key.6b6e9977: {Name:mk07b515b8a77874236f65f75d7ed92f1da27679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.685926   77491 certs.go:381] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt.6b6e9977 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt
	I0205 03:31:10.685996   77491 certs.go:385] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key.6b6e9977 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key
	I0205 03:31:10.686048   77491 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.key
	I0205 03:31:10.686064   77491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.crt with IP's: []
	I0205 03:31:10.757612   77491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.crt ...
	I0205 03:31:10.757642   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.crt: {Name:mk6314fe40653bc406d9bc8936c93e134713ddb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.757802   77491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.key ...
	I0205 03:31:10.757814   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.key: {Name:mk87a05e19c8c78ef3191ba32fe40e1269b304c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.758016   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:31:10.758054   77491 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:31:10.758061   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:31:10.758084   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:31:10.758107   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:31:10.758128   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:31:10.758199   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:31:10.758715   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:31:10.785626   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:31:10.814361   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:31:10.839358   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:31:10.863255   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0205 03:31:10.886722   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0205 03:31:10.910224   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:31:10.934083   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 03:31:10.957377   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:31:10.979896   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:31:11.001812   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:31:11.024528   77491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:31:11.039807   77491 ssh_runner.go:195] Run: openssl version
	I0205 03:31:11.045445   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:31:11.056090   77491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:31:11.060457   77491 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:31:11.060511   77491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:31:11.066216   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:31:11.076504   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:31:11.086582   77491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:31:11.090749   77491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:31:11.090797   77491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:31:11.096301   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 03:31:11.106709   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19989.pem && ln -fs /usr/share/ca-certificates/19989.pem /etc/ssl/certs/19989.pem"
	I0205 03:31:11.116904   77491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19989.pem
	I0205 03:31:11.121120   77491 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:11 /usr/share/ca-certificates/19989.pem
	I0205 03:31:11.121168   77491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19989.pem
	I0205 03:31:11.126651   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19989.pem /etc/ssl/certs/51391683.0"
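The three hash-and-link sequences above follow the standard OpenSSL trust-store convention: each CA file under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a ".0" suffix, so OpenSSL-based clients on the node can find it. A minimal shell sketch of that convention (the file name is illustrative):

  # Link a CA certificate into the system trust directory under its
  # OpenSSL subject hash, mirroring the ln/openssl pairs in the log above.
  cert=/usr/share/ca-certificates/minikubeCA.pem
  sudo ln -fs "$cert" /etc/ssl/certs/minikubeCA.pem
  hash=$(openssl x509 -hash -noout -in "$cert")
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"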
	I0205 03:31:11.137004   77491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 03:31:11.140782   77491 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0205 03:31:11.140834   77491 kubeadm.go:392] StartCluster: {Name:enable-default-cni-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:31:11.140918   77491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 03:31:11.140985   77491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:31:11.182034   77491 cri.go:89] found id: ""
	I0205 03:31:11.182097   77491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 03:31:11.196119   77491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 03:31:11.208693   77491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:31:11.221618   77491 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:31:11.221642   77491 kubeadm.go:157] found existing configuration files:
	
	I0205 03:31:11.221695   77491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:31:11.232681   77491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:31:11.232757   77491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:31:11.244988   77491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:31:11.255084   77491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:31:11.255154   77491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:31:11.265175   77491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:31:11.274139   77491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:31:11.274209   77491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:31:11.284042   77491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:31:11.293097   77491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:31:11.293171   77491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
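The grep/rm pairs above are the stale-config check: each kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint, and any file that does not (or does not exist) is removed so kubeadm can regenerate it. A compact sketch of the same check, assuming the endpoint used by this profile:

  # Drop kubeconfigs that do not point at the expected control-plane
  # endpoint so "kubeadm init" starts from a clean slate.
  endpoint="https://control-plane.minikube.internal:8443"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null \
          || sudo rm -f "/etc/kubernetes/$f"
  done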
	I0205 03:31:11.302601   77491 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:31:11.467478   77491 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:31:21.702323   77491 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0205 03:31:21.702417   77491 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:31:21.702511   77491 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:31:21.702610   77491 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:31:21.702720   77491 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0205 03:31:21.702787   77491 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:31:21.704300   77491 out.go:235]   - Generating certificates and keys ...
	I0205 03:31:21.704401   77491 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:31:21.704496   77491 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:31:21.704598   77491 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 03:31:21.704688   77491 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 03:31:21.704758   77491 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 03:31:21.704823   77491 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 03:31:21.704911   77491 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 03:31:21.705067   77491 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-253147 localhost] and IPs [192.168.72.143 127.0.0.1 ::1]
	I0205 03:31:21.705151   77491 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 03:31:21.705297   77491 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-253147 localhost] and IPs [192.168.72.143 127.0.0.1 ::1]
	I0205 03:31:21.705413   77491 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 03:31:21.705492   77491 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 03:31:21.705572   77491 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 03:31:21.705633   77491 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:31:21.705706   77491 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:31:21.705817   77491 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0205 03:31:21.705868   77491 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:31:21.705925   77491 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:31:21.705980   77491 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:31:21.706055   77491 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:31:21.706109   77491 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:31:21.707276   77491 out.go:235]   - Booting up control plane ...
	I0205 03:31:21.707358   77491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:31:21.707425   77491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:31:21.707498   77491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:31:21.707603   77491 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:31:21.707692   77491 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:31:21.707727   77491 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:31:21.707833   77491 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0205 03:31:21.707923   77491 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0205 03:31:21.707972   77491 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000944148s
	I0205 03:31:21.708055   77491 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0205 03:31:21.708123   77491 kubeadm.go:310] [api-check] The API server is healthy after 4.501976244s
	I0205 03:31:21.708226   77491 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0205 03:31:21.708331   77491 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0205 03:31:21.708379   77491 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0205 03:31:21.708542   77491 kubeadm.go:310] [mark-control-plane] Marking the node enable-default-cni-253147 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0205 03:31:21.708590   77491 kubeadm.go:310] [bootstrap-token] Using token: i9tybv.ko2zd8utm1qdci6y
	I0205 03:31:21.710478   77491 out.go:235]   - Configuring RBAC rules ...
	I0205 03:31:21.710598   77491 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0205 03:31:21.710675   77491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0205 03:31:21.710812   77491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0205 03:31:21.710919   77491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0205 03:31:21.711048   77491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0205 03:31:21.711122   77491 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0205 03:31:21.711219   77491 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0205 03:31:21.711255   77491 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0205 03:31:21.711295   77491 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0205 03:31:21.711302   77491 kubeadm.go:310] 
	I0205 03:31:21.711377   77491 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0205 03:31:21.711409   77491 kubeadm.go:310] 
	I0205 03:31:21.711493   77491 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0205 03:31:21.711500   77491 kubeadm.go:310] 
	I0205 03:31:21.711521   77491 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0205 03:31:21.711571   77491 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0205 03:31:21.711620   77491 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0205 03:31:21.711625   77491 kubeadm.go:310] 
	I0205 03:31:21.711669   77491 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0205 03:31:21.711675   77491 kubeadm.go:310] 
	I0205 03:31:21.711719   77491 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0205 03:31:21.711726   77491 kubeadm.go:310] 
	I0205 03:31:21.711768   77491 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0205 03:31:21.711835   77491 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0205 03:31:21.711892   77491 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0205 03:31:21.711898   77491 kubeadm.go:310] 
	I0205 03:31:21.711997   77491 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0205 03:31:21.712098   77491 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0205 03:31:21.712112   77491 kubeadm.go:310] 
	I0205 03:31:21.712230   77491 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i9tybv.ko2zd8utm1qdci6y \
	I0205 03:31:21.712351   77491 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 \
	I0205 03:31:21.712382   77491 kubeadm.go:310] 	--control-plane 
	I0205 03:31:21.712388   77491 kubeadm.go:310] 
	I0205 03:31:21.712509   77491 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0205 03:31:21.712520   77491 kubeadm.go:310] 
	I0205 03:31:21.712613   77491 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i9tybv.ko2zd8utm1qdci6y \
	I0205 03:31:21.712734   77491 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 
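The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key, which a joining node uses to pin the control plane's identity. It can be recomputed from the CA certificate with the usual openssl pipeline; the path below is the minikube cert location seen earlier in this log:

  # Recompute the kubeadm discovery-token CA cert hash from ca.crt.
  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'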
	I0205 03:31:21.712747   77491 cni.go:84] Creating CNI manager for "bridge"
	I0205 03:31:21.714829   77491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0205 03:31:21.715847   77491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0205 03:31:21.728548   77491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
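The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge configuration that "Configuring bridge CNI" refers to. The exact bytes are not shown in the log; a representative bridge-plus-portmap conflist of roughly this shape (illustrative, not the captured file) looks like:

  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "addIf": "true",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }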
	I0205 03:31:21.747916   77491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 03:31:21.747973   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:21.747982   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-253147 minikube.k8s.io/updated_at=2025_02_05T03_31_21_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d minikube.k8s.io/name=enable-default-cni-253147 minikube.k8s.io/primary=true
	I0205 03:31:21.898784   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:21.898791   77491 ops.go:34] apiserver oom_adj: -16
	I0205 03:31:22.398898   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:22.899591   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:23.399781   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:23.899353   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:24.399184   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:24.899785   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:25.399192   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:25.489351   77491 kubeadm.go:1113] duration metric: took 3.741429787s to wait for elevateKubeSystemPrivileges
	I0205 03:31:25.489405   77491 kubeadm.go:394] duration metric: took 14.348568211s to StartCluster
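The elevateKubeSystemPrivileges step timed above is the clusterrolebinding plus the polling loop on the default service account: cluster-admin is granted to kube-system's default service account, and startup waits until the "default" service account exists. Roughly the same thing by hand (the --kubeconfig flag used in the log is omitted here):

  # Grant cluster-admin to kube-system:default and wait for the default
  # service account to appear, as the repeated "get sa default" calls do.
  kubectl create clusterrolebinding minikube-rbac \
    --clusterrole=cluster-admin --serviceaccount=kube-system:default
  until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done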
	I0205 03:31:25.489435   77491 settings.go:142] acquiring lock: {Name:mk2eca847da5ba78f5b041a83e5cfcbdebb0c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:25.489532   77491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:31:25.490672   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:25.490945   77491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0205 03:31:25.490965   77491 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0205 03:31:25.490942   77491 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:31:25.491054   77491 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-253147"
	I0205 03:31:25.491070   77491 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-253147"
	I0205 03:31:25.491098   77491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-253147"
	I0205 03:31:25.491172   77491 config.go:182] Loaded profile config "enable-default-cni-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:31:25.491076   77491 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-253147"
	I0205 03:31:25.491240   77491 host.go:66] Checking if "enable-default-cni-253147" exists ...
	I0205 03:31:25.491554   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:31:25.491580   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:31:25.491631   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:31:25.491695   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:31:25.493470   77491 out.go:177] * Verifying Kubernetes components...
	I0205 03:31:25.494653   77491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:31:25.507196   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37971
	I0205 03:31:25.507733   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:31:25.508225   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:31:25.508251   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:31:25.508633   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:31:25.508848   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetState
	I0205 03:31:25.511915   77491 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-253147"
	I0205 03:31:25.511958   77491 host.go:66] Checking if "enable-default-cni-253147" exists ...
	I0205 03:31:25.512271   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:31:25.512312   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:31:25.512520   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44659
	I0205 03:31:25.513046   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:31:25.513629   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:31:25.513651   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:31:25.514023   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:31:25.514537   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:31:25.514584   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:31:25.528623   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0205 03:31:25.529227   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:31:25.529855   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:31:25.529881   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:31:25.530277   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:31:25.530867   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0205 03:31:25.530979   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:31:25.531034   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:31:25.531264   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:31:25.531698   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:31:25.531717   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:31:25.532127   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:31:25.532337   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetState
	I0205 03:31:25.534421   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:25.536616   77491 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:31:25.537918   77491 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:31:25.537937   77491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0205 03:31:25.537954   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:25.541749   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:25.542282   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:25.542311   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:25.542509   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:25.542722   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:25.542909   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:25.543162   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:25.548839   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46559
	I0205 03:31:25.549236   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:31:25.549759   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:31:25.549789   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:31:25.550077   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:31:25.550325   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetState
	I0205 03:31:25.551945   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:25.552161   77491 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0205 03:31:25.552177   77491 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0205 03:31:25.552196   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:25.555135   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:25.555623   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:25.555654   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:25.555813   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:25.556023   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:25.556200   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:25.556355   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:25.658549   77491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:31:25.658648   77491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0205 03:31:25.794973   77491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:31:25.815347   77491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0205 03:31:26.247996   77491 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
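The "host record injected" line is the result of the sed pipeline a few lines earlier: it inserts a hosts block ahead of the forward directive in the CoreDNS Corefile (and a log directive before errors), so host.minikube.internal resolves to the host-side gateway. After the replace, the relevant part of the Corefile looks roughly like this (the surrounding stock directives are abbreviated):

  .:53 {
      log
      errors
      hosts {
         192.168.72.1 host.minikube.internal
         fallthrough
      }
      forward . /etc/resolv.conf
      ...
  }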
	I0205 03:31:26.249058   77491 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-253147" to be "Ready" ...
	I0205 03:31:26.259706   77491 node_ready.go:49] node "enable-default-cni-253147" has status "Ready":"True"
	I0205 03:31:26.259731   77491 node_ready.go:38] duration metric: took 10.646859ms for node "enable-default-cni-253147" to be "Ready" ...
	I0205 03:31:26.259743   77491 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:31:26.263978   77491 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:26.568098   77491 main.go:141] libmachine: Making call to close driver server
	I0205 03:31:26.568137   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Close
	I0205 03:31:26.568142   77491 main.go:141] libmachine: Making call to close driver server
	I0205 03:31:26.568165   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Close
	I0205 03:31:26.568478   77491 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:31:26.568508   77491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:31:26.568517   77491 main.go:141] libmachine: Making call to close driver server
	I0205 03:31:26.568508   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Closing plugin on server side
	I0205 03:31:26.568481   77491 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:31:26.568578   77491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:31:26.568588   77491 main.go:141] libmachine: Making call to close driver server
	I0205 03:31:26.568601   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Close
	I0205 03:31:26.568530   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Close
	I0205 03:31:26.568917   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Closing plugin on server side
	I0205 03:31:26.568925   77491 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:31:26.568984   77491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:31:26.568985   77491 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:31:26.569194   77491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:31:26.568944   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Closing plugin on server side
	I0205 03:31:26.585395   77491 main.go:141] libmachine: Making call to close driver server
	I0205 03:31:26.585423   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Close
	I0205 03:31:26.585742   77491 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:31:26.585761   77491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:31:26.585769   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Closing plugin on server side
	I0205 03:31:26.587945   77491 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0205 03:31:26.589121   77491 addons.go:514] duration metric: took 1.098153606s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0205 03:31:26.753030   77491 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-253147" context rescaled to 1 replicas
	I0205 03:31:28.269500   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:30.275505   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:32.770529   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:35.270080   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:37.769811   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:40.270331   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:42.769745   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:44.770281   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:47.268926   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:49.270586   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:51.770521   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:54.269825   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:56.769127   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:58.771048   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:32:01.270244   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:32:03.769464   77491 pod_ready.go:93] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:03.769488   77491 pod_ready.go:82] duration metric: took 37.505471392s for pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.769500   77491 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-j5vpn" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.771193   77491 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-j5vpn" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-j5vpn" not found
	I0205 03:32:03.771213   77491 pod_ready.go:82] duration metric: took 1.707852ms for pod "coredns-668d6bf9bc-j5vpn" in "kube-system" namespace to be "Ready" ...
	E0205 03:32:03.771222   77491 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-j5vpn" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-j5vpn" not found
	I0205 03:32:03.771230   77491 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.775304   77491 pod_ready.go:93] pod "etcd-enable-default-cni-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:03.775324   77491 pod_ready.go:82] duration metric: took 4.087134ms for pod "etcd-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.775335   77491 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.778837   77491 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:03.778853   77491 pod_ready.go:82] duration metric: took 3.511148ms for pod "kube-apiserver-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.778864   77491 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.782399   77491 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:03.782416   77491 pod_ready.go:82] duration metric: took 3.544891ms for pod "kube-controller-manager-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.782425   77491 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-56g74" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.967978   77491 pod_ready.go:93] pod "kube-proxy-56g74" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:03.968003   77491 pod_ready.go:82] duration metric: took 185.571014ms for pod "kube-proxy-56g74" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.968013   77491 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:04.368643   77491 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:04.368669   77491 pod_ready.go:82] duration metric: took 400.649646ms for pod "kube-scheduler-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:04.368679   77491 pod_ready.go:39] duration metric: took 38.10892276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
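The 38-second extra wait above polls each system-critical pod until its Ready condition is true; the coredns pod was the slow one at about 37.5s. A rough kubectl equivalent of that wait (not what the test harness itself runs) would be:

  # Wait for the system-critical pods in kube-system to become Ready.
  kubectl -n kube-system wait pod --for=condition=Ready \
    -l k8s-app=kube-dns --timeout=15m
  kubectl -n kube-system wait pod --for=condition=Ready \
    -l component=kube-apiserver --timeout=15m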
	I0205 03:32:04.368698   77491 api_server.go:52] waiting for apiserver process to appear ...
	I0205 03:32:04.368762   77491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:32:04.384644   77491 api_server.go:72] duration metric: took 38.893584005s to wait for apiserver process to appear ...
	I0205 03:32:04.384671   77491 api_server.go:88] waiting for apiserver healthz status ...
	I0205 03:32:04.384688   77491 api_server.go:253] Checking apiserver healthz at https://192.168.72.143:8443/healthz ...
	I0205 03:32:04.389020   77491 api_server.go:279] https://192.168.72.143:8443/healthz returned 200:
	ok
	I0205 03:32:04.389953   77491 api_server.go:141] control plane version: v1.32.1
	I0205 03:32:04.389976   77491 api_server.go:131] duration metric: took 5.299568ms to wait for apiserver health ...
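The healthz check above amounts to an HTTPS GET against the apiserver's /healthz endpoint; a plain "ok" with HTTP 200 is what the log records. The same probe from a shell, assuming anonymous auth is left at its default (the built-in system:public-info-viewer binding allows unauthenticated access to /healthz), against the node IP from this profile:

  # Probe the apiserver health endpoint the way the healthz check does.
  curl -sk https://192.168.72.143:8443/healthz; echo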
	I0205 03:32:04.389984   77491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 03:32:04.569218   77491 system_pods.go:59] 7 kube-system pods found
	I0205 03:32:04.569254   77491 system_pods.go:61] "coredns-668d6bf9bc-8nj85" [35e45a5a-0f36-4b67-9c91-ba4b0436f156] Running
	I0205 03:32:04.569262   77491 system_pods.go:61] "etcd-enable-default-cni-253147" [b2e06ccf-3909-417f-9ed0-ce47bd790bde] Running
	I0205 03:32:04.569267   77491 system_pods.go:61] "kube-apiserver-enable-default-cni-253147" [0463ffc5-a722-4df5-9885-1db3f7c8e89f] Running
	I0205 03:32:04.569274   77491 system_pods.go:61] "kube-controller-manager-enable-default-cni-253147" [38a85a6f-c6a1-473b-bd05-2d64be2f8c52] Running
	I0205 03:32:04.569279   77491 system_pods.go:61] "kube-proxy-56g74" [fa42b842-56ce-4965-9822-f28a774ab641] Running
	I0205 03:32:04.569284   77491 system_pods.go:61] "kube-scheduler-enable-default-cni-253147" [2fccc41d-e339-4ea1-a296-be7befb819fb] Running
	I0205 03:32:04.569289   77491 system_pods.go:61] "storage-provisioner" [45a5c96f-ab44-4fc3-81c0-b4f8208b1973] Running
	I0205 03:32:04.569296   77491 system_pods.go:74] duration metric: took 179.306382ms to wait for pod list to return data ...
	I0205 03:32:04.569304   77491 default_sa.go:34] waiting for default service account to be created ...
	I0205 03:32:04.768437   77491 default_sa.go:45] found service account: "default"
	I0205 03:32:04.768463   77491 default_sa.go:55] duration metric: took 199.153251ms for default service account to be created ...
	I0205 03:32:04.768472   77491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0205 03:32:04.968947   77491 system_pods.go:86] 7 kube-system pods found
	I0205 03:32:04.968976   77491 system_pods.go:89] "coredns-668d6bf9bc-8nj85" [35e45a5a-0f36-4b67-9c91-ba4b0436f156] Running
	I0205 03:32:04.968982   77491 system_pods.go:89] "etcd-enable-default-cni-253147" [b2e06ccf-3909-417f-9ed0-ce47bd790bde] Running
	I0205 03:32:04.968986   77491 system_pods.go:89] "kube-apiserver-enable-default-cni-253147" [0463ffc5-a722-4df5-9885-1db3f7c8e89f] Running
	I0205 03:32:04.968990   77491 system_pods.go:89] "kube-controller-manager-enable-default-cni-253147" [38a85a6f-c6a1-473b-bd05-2d64be2f8c52] Running
	I0205 03:32:04.968994   77491 system_pods.go:89] "kube-proxy-56g74" [fa42b842-56ce-4965-9822-f28a774ab641] Running
	I0205 03:32:04.968997   77491 system_pods.go:89] "kube-scheduler-enable-default-cni-253147" [2fccc41d-e339-4ea1-a296-be7befb819fb] Running
	I0205 03:32:04.969001   77491 system_pods.go:89] "storage-provisioner" [45a5c96f-ab44-4fc3-81c0-b4f8208b1973] Running
	I0205 03:32:04.969010   77491 system_pods.go:126] duration metric: took 200.530558ms to wait for k8s-apps to be running ...
	I0205 03:32:04.969017   77491 system_svc.go:44] waiting for kubelet service to be running ....
	I0205 03:32:04.969060   77491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:32:04.983667   77491 system_svc.go:56] duration metric: took 14.639753ms WaitForService to wait for kubelet
	I0205 03:32:04.983693   77491 kubeadm.go:582] duration metric: took 39.492639559s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:32:04.983710   77491 node_conditions.go:102] verifying NodePressure condition ...
	I0205 03:32:05.168037   77491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0205 03:32:05.168063   77491 node_conditions.go:123] node cpu capacity is 2
	I0205 03:32:05.168077   77491 node_conditions.go:105] duration metric: took 184.363284ms to run NodePressure ...
	I0205 03:32:05.168088   77491 start.go:241] waiting for startup goroutines ...
	I0205 03:32:05.168094   77491 start.go:246] waiting for cluster config update ...
	I0205 03:32:05.168103   77491 start.go:255] writing updated cluster config ...
	I0205 03:32:05.168381   77491 ssh_runner.go:195] Run: rm -f paused
	I0205 03:32:05.216322   77491 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0205 03:32:05.218842   77491 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-253147" cluster and "default" namespace by default
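The closing message means the kubeconfig updated earlier in this log now carries a context named after the profile, with the default namespace selected. Verifying that from a shell would look like:

  # Confirm the active context and reach the new cluster through it.
  kubectl config current-context          # expect: enable-default-cni-253147
  kubectl --context enable-default-cni-253147 get nodes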
	
	
	==> CRI-O <==
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.427532602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726533427498550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=648a6d33-7499-4601-aff3-63bb4f89f0ce name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.428000600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2dbafc9e-3d48-4e21-a5e1-6a2c1499d635 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.428043698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2dbafc9e-3d48-4e21-a5e1-6a2c1499d635 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.428079024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2dbafc9e-3d48-4e21-a5e1-6a2c1499d635 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.458150801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6275c0e7-a839-4d40-a133-9914e9cbb8f9 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.458248604Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6275c0e7-a839-4d40-a133-9914e9cbb8f9 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.459225703Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d94209a-a72c-469b-bbdb-5d91539c6ba2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.459596940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726533459576894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d94209a-a72c-469b-bbdb-5d91539c6ba2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.460177876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f62410e-6c47-4cd3-8aa7-b89e351d0d19 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.460245119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f62410e-6c47-4cd3-8aa7-b89e351d0d19 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.460276599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6f62410e-6c47-4cd3-8aa7-b89e351d0d19 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.490488129Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31e46e32-08b6-488f-bc4d-b01394faa619 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.490559332Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31e46e32-08b6-488f-bc4d-b01394faa619 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.492002605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0230eb7-dc93-4f3d-93d3-c3f247c5f2d6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.492403236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726533492378724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0230eb7-dc93-4f3d-93d3-c3f247c5f2d6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.493119676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5499b390-f020-4fbc-a41f-170446807afe name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.493214023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5499b390-f020-4fbc-a41f-170446807afe name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.493259284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5499b390-f020-4fbc-a41f-170446807afe name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.523255293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4545cfe-4b09-43e7-a399-769599182d8a name=/runtime.v1.RuntimeService/Version
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.523345937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4545cfe-4b09-43e7-a399-769599182d8a name=/runtime.v1.RuntimeService/Version
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.524389200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47a0517d-591b-4948-ab12-ec98d802fad5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.524801759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726533524726120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47a0517d-591b-4948-ab12-ec98d802fad5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.525333064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=723a07dc-fadc-4a0e-800a-649066c59fd0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.525384139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=723a07dc-fadc-4a0e-800a-649066c59fd0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:35:33 old-k8s-version-191773 crio[628]: time="2025-02-05 03:35:33.525421079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=723a07dc-fadc-4a0e-800a-649066c59fd0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb 5 03:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053905] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.006200] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.096613] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.500084] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.640447] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.062173] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064592] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.179648] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.107931] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.224718] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.148854] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.061424] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.976913] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[ +13.604699] kauditd_printk_skb: 46 callbacks suppressed
	[Feb 5 03:22] systemd-fstab-generator[5042]: Ignoring "noauto" option for root device
	[Feb 5 03:24] systemd-fstab-generator[5320]: Ignoring "noauto" option for root device
	[  +0.067795] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 03:35:33 up 17 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-191773 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]: net/http.(*Transport).dialConnFor(0xc00066a000, 0xc000c8efd0)
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]: created by net/http.(*Transport).queueForDial
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]: goroutine 129 [syscall]:
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]: syscall.Syscall6(0xe8, 0xd, 0xc0007c1b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc0007c1b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000bf4140, 0x0, 0x0, 0x0)
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc00077eb90)
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Feb 05 03:35:31 old-k8s-version-191773 kubelet[6491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Feb 05 03:35:31 old-k8s-version-191773 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 05 03:35:31 old-k8s-version-191773 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 05 03:35:32 old-k8s-version-191773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Feb 05 03:35:32 old-k8s-version-191773 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 05 03:35:32 old-k8s-version-191773 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 05 03:35:32 old-k8s-version-191773 kubelet[6500]: I0205 03:35:32.513570    6500 server.go:416] Version: v1.20.0
	Feb 05 03:35:32 old-k8s-version-191773 kubelet[6500]: I0205 03:35:32.513911    6500 server.go:837] Client rotation is on, will bootstrap in background
	Feb 05 03:35:32 old-k8s-version-191773 kubelet[6500]: I0205 03:35:32.515902    6500 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 05 03:35:32 old-k8s-version-191773 kubelet[6500]: I0205 03:35:32.516788    6500 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Feb 05 03:35:32 old-k8s-version-191773 kubelet[6500]: W0205 03:35:32.516985    6500 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
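
The crio entries at the top of this log are ordinary CRI gRPC calls (Version, ImageFsInfo, ListContainers) issued against the runtime. As a minimal sketch of the same two queries, assuming the default CRI-O socket path /var/run/crio/crio.sock and the k8s.io/cri-api client (both assumptions, not taken from the report):

// cri_probe.go: issue the Version and ListContainers calls seen in the crio debug log.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Assumed socket path; adjust if the runtime listens elsewhere.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// An empty filter returns the full container list, which is what the
	// "No filters were applied, returning full container list" debug line reports.
	list, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("containers: %d\n", len(list.Containers))
}

Against this node the list would come back empty, consistent with the empty "container status" table in the dump above.
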
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191773 -n old-k8s-version-191773
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 2 (224.106978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-191773" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.47s)
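
The post-mortem above shows the failure pattern: kubectl is refused on localhost:8443 and the profile's apiserver reports Stopped. A minimal sketch of the kind of port probe that reproduces that symptom, using the address from the log (a hypothetical helper, not part of the test suite):

// apiserver_probe.go: check whether the apiserver port kubectl failed to reach accepts connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		// With the apiserver stopped this prints the same "connection refused" seen in the log.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
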

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (367.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
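The wait loop behind this step repeatedly lists pods in the kubernetes-dashboard namespace with the k8s-app=kubernetes-dashboard selector; the connection-refused warnings that follow are those list calls failing against the stopped apiserver. A minimal client-go sketch of that poll, assuming a kubeconfig at the default ~/.kube/config location (an assumption; the suite uses per-profile kubeconfigs):

// dashboard_poll.go: list dashboard pods by label selector, as the helper does on each retry.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // assumed path
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// When the apiserver is down, this is the call that surfaces
		// the repeated "connection refused" warnings below.
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}
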
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:35:36.914065   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:35:42.757454   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:35:47.031793   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:36:04.764850   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:36:05.414632   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:36:05.753481   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:36:05.759867   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:36:05.771228   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:36:05.792703   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:36:05.834144   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:36:05.915548   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:36:06.077066   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:36:06.398766   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:36:07.040777   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:36:07.367099   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:36:08.322062   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:36:10.883931   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:36:16.005659   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:36:26.247042   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:36:27.993087   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:36:35.069046   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:36:46.729205   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:37:05.732979   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:37:05.739339   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:37:05.750735   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:37:05.772196   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:37:05.813625   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:37:05.895095   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:37:06.056729   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:37:06.378064   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:37:07.020085   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:37:08.301475   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:37:10.863277   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:37:15.985239   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:37:26.226869   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:37:27.336065   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:37:27.691066   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:37:46.708719   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:37:49.914869   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:37:53.054850   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:37:58.896877   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:38:20.755745   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/default-k8s-diff-port-568677/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:38:26.599224   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/calico-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:38:27.670561   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:38:49.612572   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:39:09.273641   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:39:10.654924   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:39:40.947503   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:39:43.475632   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:39:49.592054   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:40:06.055018   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:40:11.177728   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/custom-flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:40:33.717291   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:40:33.756818   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/flannel-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:41:04.764842   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:41:05.753202   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:41:07.366722   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/kindnet-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
E0205 03:41:33.453940   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.74:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.74:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191773 -n old-k8s-version-191773
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 2 (206.969411ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-191773" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-191773 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-191773 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.626µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-191773 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
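The failed checks above reduce to two API calls against the old-k8s-version-191773 cluster: listing dashboard pods by the k8s-app=kubernetes-dashboard label selector, and reading the dashboard-metrics-scraper deployment, whose info is expected to contain registry.k8s.io/echoserver:1.4. With the apiserver reported as Stopped, both calls can only fail. A minimal manual cross-check (a sketch only, reusing the profile, namespace, and deployment names that appear in this log; the jsonpath expression is illustrative) would be:

	kubectl --context old-k8s-version-191773 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-191773 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
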
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 2 (204.623507ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-191773 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147 sudo cat                | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147 sudo cat                | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147 sudo cat                | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-253147                         | enable-default-cni-253147 | jenkins | v1.35.0 | 05 Feb 25 03:32 UTC | 05 Feb 25 03:32 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 03:30:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 03:30:13.058904   77491 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:30:13.059041   77491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:30:13.059052   77491 out.go:358] Setting ErrFile to fd 2...
	I0205 03:30:13.059059   77491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:30:13.059250   77491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:30:13.059842   77491 out.go:352] Setting JSON to false
	I0205 03:30:13.060925   77491 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7964,"bootTime":1738718249,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:30:13.061015   77491 start.go:139] virtualization: kvm guest
	I0205 03:30:13.062790   77491 out.go:177] * [enable-default-cni-253147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:30:13.064298   77491 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:30:13.064304   77491 notify.go:220] Checking for updates...
	I0205 03:30:13.066361   77491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:30:13.067427   77491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:30:13.068416   77491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:30:13.069475   77491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:30:13.070547   77491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:30:13.071902   77491 config.go:182] Loaded profile config "bridge-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:30:13.072016   77491 config.go:182] Loaded profile config "flannel-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:30:13.072156   77491 config.go:182] Loaded profile config "old-k8s-version-191773": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0205 03:30:13.072247   77491 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:30:13.948554   77491 out.go:177] * Using the kvm2 driver based on user configuration
	I0205 03:30:13.949679   77491 start.go:297] selected driver: kvm2
	I0205 03:30:13.949696   77491 start.go:901] validating driver "kvm2" against <nil>
	I0205 03:30:13.949707   77491 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:30:13.950427   77491 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:30:13.950526   77491 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 03:30:13.968041   77491 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 03:30:13.968159   77491 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0205 03:30:13.968502   77491 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0205 03:30:13.968542   77491 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:30:13.968583   77491 cni.go:84] Creating CNI manager for "bridge"
	I0205 03:30:13.968591   77491 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0205 03:30:13.968684   77491 start.go:340] cluster config:
	{Name:enable-default-cni-253147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:30:13.968829   77491 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 03:30:13.970557   77491 out.go:177] * Starting "enable-default-cni-253147" primary control-plane node in "enable-default-cni-253147" cluster
	I0205 03:30:11.792298   77242 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0205 03:30:11.792433   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:11.792476   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:11.807003   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0205 03:30:11.807471   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:11.807993   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:11.808016   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:11.808363   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:11.808572   77242 main.go:141] libmachine: (bridge-253147) Calling .GetMachineName
	I0205 03:30:11.808746   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:11.808892   77242 start.go:159] libmachine.API.Create for "bridge-253147" (driver="kvm2")
	I0205 03:30:11.808924   77242 client.go:168] LocalClient.Create starting
	I0205 03:30:11.808967   77242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem
	I0205 03:30:11.809004   77242 main.go:141] libmachine: Decoding PEM data...
	I0205 03:30:11.809019   77242 main.go:141] libmachine: Parsing certificate...
	I0205 03:30:11.809087   77242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem
	I0205 03:30:11.809110   77242 main.go:141] libmachine: Decoding PEM data...
	I0205 03:30:11.809127   77242 main.go:141] libmachine: Parsing certificate...
	I0205 03:30:11.809160   77242 main.go:141] libmachine: Running pre-create checks...
	I0205 03:30:11.809172   77242 main.go:141] libmachine: (bridge-253147) Calling .PreCreateCheck
	I0205 03:30:11.809522   77242 main.go:141] libmachine: (bridge-253147) Calling .GetConfigRaw
	I0205 03:30:11.809936   77242 main.go:141] libmachine: Creating machine...
	I0205 03:30:11.809948   77242 main.go:141] libmachine: (bridge-253147) Calling .Create
	I0205 03:30:11.810068   77242 main.go:141] libmachine: (bridge-253147) creating KVM machine...
	I0205 03:30:11.810087   77242 main.go:141] libmachine: (bridge-253147) creating network...
	I0205 03:30:11.812574   77242 main.go:141] libmachine: (bridge-253147) DBG | found existing default KVM network
	I0205 03:30:11.938961   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:11.938763   77289 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e6:1a:05} reservation:<nil>}
	I0205 03:30:11.939948   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:11.939863   77289 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027c820}
	I0205 03:30:11.939991   77242 main.go:141] libmachine: (bridge-253147) DBG | created network xml: 
	I0205 03:30:11.940010   77242 main.go:141] libmachine: (bridge-253147) DBG | <network>
	I0205 03:30:11.940020   77242 main.go:141] libmachine: (bridge-253147) DBG |   <name>mk-bridge-253147</name>
	I0205 03:30:11.940025   77242 main.go:141] libmachine: (bridge-253147) DBG |   <dns enable='no'/>
	I0205 03:30:11.940030   77242 main.go:141] libmachine: (bridge-253147) DBG |   
	I0205 03:30:11.940038   77242 main.go:141] libmachine: (bridge-253147) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0205 03:30:11.940043   77242 main.go:141] libmachine: (bridge-253147) DBG |     <dhcp>
	I0205 03:30:11.940049   77242 main.go:141] libmachine: (bridge-253147) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0205 03:30:11.940060   77242 main.go:141] libmachine: (bridge-253147) DBG |     </dhcp>
	I0205 03:30:11.940064   77242 main.go:141] libmachine: (bridge-253147) DBG |   </ip>
	I0205 03:30:11.940068   77242 main.go:141] libmachine: (bridge-253147) DBG |   
	I0205 03:30:11.940072   77242 main.go:141] libmachine: (bridge-253147) DBG | </network>
	I0205 03:30:11.940079   77242 main.go:141] libmachine: (bridge-253147) DBG | 
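
The DBG lines above show the driver picking the free 192.168.50.0/24 subnet and emitting a libvirt network definition with a matching DHCP range. As a rough, standalone sketch (not minikube's actual code), the same XML can be rendered with Go's text/template; every identifier below is illustrative:

package main

import (
	"os"
	"text/template"
)

// netParams mirrors the values visible in the generated XML above.
type netParams struct {
	Name, Gateway, Netmask, ClientMin, ClientMax string
}

const networkTmpl = `<network>
  <name>mk-{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>`

func main() {
	t := template.Must(template.New("net").Parse(networkTmpl))
	// Values taken from the log: gateway .1, DHCP range .2-.253 on 192.168.50.0/24.
	_ = t.Execute(os.Stdout, netParams{
		Name: "bridge-253147", Gateway: "192.168.50.1", Netmask: "255.255.255.0",
		ClientMin: "192.168.50.2", ClientMax: "192.168.50.253",
	})
}
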
	I0205 03:30:12.489028   77242 main.go:141] libmachine: (bridge-253147) DBG | trying to create private KVM network mk-bridge-253147 192.168.50.0/24...
	I0205 03:30:12.565957   77242 main.go:141] libmachine: (bridge-253147) setting up store path in /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147 ...
	I0205 03:30:12.565984   77242 main.go:141] libmachine: (bridge-253147) building disk image from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0205 03:30:12.565995   77242 main.go:141] libmachine: (bridge-253147) DBG | private KVM network mk-bridge-253147 192.168.50.0/24 created
	I0205 03:30:12.566012   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:12.565893   77289 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:30:12.566051   77242 main.go:141] libmachine: (bridge-253147) Downloading /home/jenkins/minikube-integration/20363-12788/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0205 03:30:12.855158   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:12.855017   77289 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa...
	I0205 03:30:12.960059   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:12.959920   77289 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/bridge-253147.rawdisk...
	I0205 03:30:12.960097   77242 main.go:141] libmachine: (bridge-253147) DBG | Writing magic tar header
	I0205 03:30:12.960113   77242 main.go:141] libmachine: (bridge-253147) DBG | Writing SSH key tar header
	I0205 03:30:12.960133   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:12.960037   77289 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147 ...
	I0205 03:30:12.960152   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147
	I0205 03:30:12.960168   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147 (perms=drwx------)
	I0205 03:30:12.960183   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines
	I0205 03:30:12.960205   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines (perms=drwxr-xr-x)
	I0205 03:30:12.960221   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:30:12.960237   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube (perms=drwxr-xr-x)
	I0205 03:30:12.960249   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788
	I0205 03:30:12.960258   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788 (perms=drwxrwxr-x)
	I0205 03:30:12.960270   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0205 03:30:12.960287   77242 main.go:141] libmachine: (bridge-253147) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0205 03:30:12.960305   77242 main.go:141] libmachine: (bridge-253147) creating domain...
	I0205 03:30:12.960326   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0205 03:30:12.960345   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home/jenkins
	I0205 03:30:12.960357   77242 main.go:141] libmachine: (bridge-253147) DBG | checking permissions on dir: /home
	I0205 03:30:12.960369   77242 main.go:141] libmachine: (bridge-253147) DBG | skipping /home - not owner
	I0205 03:30:12.961532   77242 main.go:141] libmachine: (bridge-253147) define libvirt domain using xml: 
	I0205 03:30:12.961556   77242 main.go:141] libmachine: (bridge-253147) <domain type='kvm'>
	I0205 03:30:12.961578   77242 main.go:141] libmachine: (bridge-253147)   <name>bridge-253147</name>
	I0205 03:30:12.961587   77242 main.go:141] libmachine: (bridge-253147)   <memory unit='MiB'>3072</memory>
	I0205 03:30:12.961599   77242 main.go:141] libmachine: (bridge-253147)   <vcpu>2</vcpu>
	I0205 03:30:12.961604   77242 main.go:141] libmachine: (bridge-253147)   <features>
	I0205 03:30:12.961611   77242 main.go:141] libmachine: (bridge-253147)     <acpi/>
	I0205 03:30:12.961625   77242 main.go:141] libmachine: (bridge-253147)     <apic/>
	I0205 03:30:12.961647   77242 main.go:141] libmachine: (bridge-253147)     <pae/>
	I0205 03:30:12.961662   77242 main.go:141] libmachine: (bridge-253147)     
	I0205 03:30:12.961670   77242 main.go:141] libmachine: (bridge-253147)   </features>
	I0205 03:30:12.961678   77242 main.go:141] libmachine: (bridge-253147)   <cpu mode='host-passthrough'>
	I0205 03:30:12.961688   77242 main.go:141] libmachine: (bridge-253147)   
	I0205 03:30:12.961699   77242 main.go:141] libmachine: (bridge-253147)   </cpu>
	I0205 03:30:12.961707   77242 main.go:141] libmachine: (bridge-253147)   <os>
	I0205 03:30:12.961713   77242 main.go:141] libmachine: (bridge-253147)     <type>hvm</type>
	I0205 03:30:12.961725   77242 main.go:141] libmachine: (bridge-253147)     <boot dev='cdrom'/>
	I0205 03:30:12.961735   77242 main.go:141] libmachine: (bridge-253147)     <boot dev='hd'/>
	I0205 03:30:12.961768   77242 main.go:141] libmachine: (bridge-253147)     <bootmenu enable='no'/>
	I0205 03:30:12.961793   77242 main.go:141] libmachine: (bridge-253147)   </os>
	I0205 03:30:12.961802   77242 main.go:141] libmachine: (bridge-253147)   <devices>
	I0205 03:30:12.961813   77242 main.go:141] libmachine: (bridge-253147)     <disk type='file' device='cdrom'>
	I0205 03:30:12.961826   77242 main.go:141] libmachine: (bridge-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/boot2docker.iso'/>
	I0205 03:30:12.961835   77242 main.go:141] libmachine: (bridge-253147)       <target dev='hdc' bus='scsi'/>
	I0205 03:30:12.961852   77242 main.go:141] libmachine: (bridge-253147)       <readonly/>
	I0205 03:30:12.961865   77242 main.go:141] libmachine: (bridge-253147)     </disk>
	I0205 03:30:12.961885   77242 main.go:141] libmachine: (bridge-253147)     <disk type='file' device='disk'>
	I0205 03:30:12.961903   77242 main.go:141] libmachine: (bridge-253147)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0205 03:30:12.961921   77242 main.go:141] libmachine: (bridge-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/bridge-253147.rawdisk'/>
	I0205 03:30:12.961933   77242 main.go:141] libmachine: (bridge-253147)       <target dev='hda' bus='virtio'/>
	I0205 03:30:12.961944   77242 main.go:141] libmachine: (bridge-253147)     </disk>
	I0205 03:30:12.961955   77242 main.go:141] libmachine: (bridge-253147)     <interface type='network'>
	I0205 03:30:12.961968   77242 main.go:141] libmachine: (bridge-253147)       <source network='mk-bridge-253147'/>
	I0205 03:30:12.961976   77242 main.go:141] libmachine: (bridge-253147)       <model type='virtio'/>
	I0205 03:30:12.961995   77242 main.go:141] libmachine: (bridge-253147)     </interface>
	I0205 03:30:12.962002   77242 main.go:141] libmachine: (bridge-253147)     <interface type='network'>
	I0205 03:30:12.962012   77242 main.go:141] libmachine: (bridge-253147)       <source network='default'/>
	I0205 03:30:12.962022   77242 main.go:141] libmachine: (bridge-253147)       <model type='virtio'/>
	I0205 03:30:12.962032   77242 main.go:141] libmachine: (bridge-253147)     </interface>
	I0205 03:30:12.962042   77242 main.go:141] libmachine: (bridge-253147)     <serial type='pty'>
	I0205 03:30:12.962054   77242 main.go:141] libmachine: (bridge-253147)       <target port='0'/>
	I0205 03:30:12.962064   77242 main.go:141] libmachine: (bridge-253147)     </serial>
	I0205 03:30:12.962073   77242 main.go:141] libmachine: (bridge-253147)     <console type='pty'>
	I0205 03:30:12.962083   77242 main.go:141] libmachine: (bridge-253147)       <target type='serial' port='0'/>
	I0205 03:30:12.962092   77242 main.go:141] libmachine: (bridge-253147)     </console>
	I0205 03:30:12.962102   77242 main.go:141] libmachine: (bridge-253147)     <rng model='virtio'>
	I0205 03:30:12.962113   77242 main.go:141] libmachine: (bridge-253147)       <backend model='random'>/dev/random</backend>
	I0205 03:30:12.962123   77242 main.go:141] libmachine: (bridge-253147)     </rng>
	I0205 03:30:12.962133   77242 main.go:141] libmachine: (bridge-253147)     
	I0205 03:30:12.962142   77242 main.go:141] libmachine: (bridge-253147)     
	I0205 03:30:12.962150   77242 main.go:141] libmachine: (bridge-253147)   </devices>
	I0205 03:30:12.962157   77242 main.go:141] libmachine: (bridge-253147) </domain>
	I0205 03:30:12.962170   77242 main.go:141] libmachine: (bridge-253147) 
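
With the domain XML above in hand, the driver defines and boots the guest through libvirt. A minimal stand-in for that step, shelling out to the virsh CLI instead of minikube's libvirt bindings (the file name and error handling are illustrative):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Define the domain from an XML file, then start it.
	// "domain.xml" is a placeholder for the definition shown in the log above.
	if out, err := exec.Command("virsh", "define", "domain.xml").CombinedOutput(); err != nil {
		log.Fatalf("virsh define: %v\n%s", err, out)
	}
	if out, err := exec.Command("virsh", "start", "bridge-253147").CombinedOutput(); err != nil {
		log.Fatalf("virsh start: %v\n%s", err, out)
	}
}
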
	I0205 03:30:12.969379   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:cf:7f:ba in network default
	I0205 03:30:12.970167   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:12.970200   77242 main.go:141] libmachine: (bridge-253147) starting domain...
	I0205 03:30:12.970220   77242 main.go:141] libmachine: (bridge-253147) ensuring networks are active...
	I0205 03:30:12.971065   77242 main.go:141] libmachine: (bridge-253147) Ensuring network default is active
	I0205 03:30:12.971477   77242 main.go:141] libmachine: (bridge-253147) Ensuring network mk-bridge-253147 is active
	I0205 03:30:12.972066   77242 main.go:141] libmachine: (bridge-253147) getting domain XML...
	I0205 03:30:12.972914   77242 main.go:141] libmachine: (bridge-253147) creating domain...
	I0205 03:30:14.273914   77242 main.go:141] libmachine: (bridge-253147) waiting for IP...
	I0205 03:30:14.274688   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:14.275235   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:14.275342   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:14.275248   77289 retry.go:31] will retry after 305.177217ms: waiting for domain to come up
	I0205 03:30:14.581781   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:14.582455   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:14.582483   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:14.582419   77289 retry.go:31] will retry after 267.088448ms: waiting for domain to come up
	I0205 03:30:14.850832   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:14.851332   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:14.851369   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:14.851305   77289 retry.go:31] will retry after 408.091339ms: waiting for domain to come up
	I0205 03:30:15.261214   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:15.261815   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:15.261850   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:15.261757   77289 retry.go:31] will retry after 594.941946ms: waiting for domain to come up
	I0205 03:30:15.860548   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:15.861097   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:15.861275   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:15.861171   77289 retry.go:31] will retry after 628.329015ms: waiting for domain to come up
	I0205 03:30:16.491123   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:16.491724   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:16.491768   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:16.491692   77289 retry.go:31] will retry after 777.442694ms: waiting for domain to come up
	I0205 03:30:13.971691   77491 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:30:13.971753   77491 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 03:30:13.971767   77491 cache.go:56] Caching tarball of preloaded images
	I0205 03:30:13.971880   77491 preload.go:172] Found /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 03:30:13.971895   77491 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 03:30:13.972022   77491 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/config.json ...
	I0205 03:30:13.972046   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/config.json: {Name:mk2d8203c5bd379ff80e35aa7d483c877cb991a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:13.972236   77491 start.go:360] acquireMachinesLock for enable-default-cni-253147: {Name:mka859d8706e94e04a549fcebb98cdac86bfe5a9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0205 03:30:17.270468   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:17.270930   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:17.270970   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:17.270899   77289 retry.go:31] will retry after 1.142243743s: waiting for domain to come up
	I0205 03:30:18.414357   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:18.414829   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:18.414855   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:18.414802   77289 retry.go:31] will retry after 1.264093425s: waiting for domain to come up
	I0205 03:30:19.681132   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:19.681619   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:19.681640   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:19.681609   77289 retry.go:31] will retry after 1.561141318s: waiting for domain to come up
	I0205 03:30:21.245250   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:21.245808   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:21.245866   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:21.245809   77289 retry.go:31] will retry after 1.818541717s: waiting for domain to come up
	I0205 03:30:23.066293   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:23.066843   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:23.066870   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:23.066808   77289 retry.go:31] will retry after 2.860967461s: waiting for domain to come up
	I0205 03:30:25.929813   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:25.930339   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:25.930377   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:25.930305   77289 retry.go:31] will retry after 2.262438462s: waiting for domain to come up
	I0205 03:30:28.194336   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:28.194742   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:28.194764   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:28.194725   77289 retry.go:31] will retry after 2.755818062s: waiting for domain to come up
	I0205 03:30:30.952245   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:30.952691   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find current IP address of domain bridge-253147 in network mk-bridge-253147
	I0205 03:30:30.952714   77242 main.go:141] libmachine: (bridge-253147) DBG | I0205 03:30:30.952656   77289 retry.go:31] will retry after 3.807968232s: waiting for domain to come up
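
The retry.go lines above are a poll-until-ready loop: each attempt asks libvirt for the domain's DHCP lease and, when no IP is present yet, sleeps for a growing, jittered interval before trying again. A simplified sketch of that pattern, with a hypothetical lookupIP callback standing in for the lease query:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookupIP until it returns an address or the timeout expires.
// The growing delay loosely mirrors the jittered backoff seen in the log above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay += delay / 2 // back off between attempts
		}
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	ip, err := waitForIP(func() (string, error) { return "192.168.50.246", nil }, time.Minute)
	fmt.Println(ip, err)
}
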
	I0205 03:30:36.566403   77491 start.go:364] duration metric: took 22.594138599s to acquireMachinesLock for "enable-default-cni-253147"
	I0205 03:30:36.566456   77491 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:30:36.566550   77491 start.go:125] createHost starting for "" (driver="kvm2")
	I0205 03:30:34.762374   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.762960   77242 main.go:141] libmachine: (bridge-253147) found domain IP: 192.168.50.246
	I0205 03:30:34.762979   77242 main.go:141] libmachine: (bridge-253147) reserving static IP address...
	I0205 03:30:34.762992   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has current primary IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.763299   77242 main.go:141] libmachine: (bridge-253147) DBG | unable to find host DHCP lease matching {name: "bridge-253147", mac: "52:54:00:8f:4a:f9", ip: "192.168.50.246"} in network mk-bridge-253147
	I0205 03:30:34.838594   77242 main.go:141] libmachine: (bridge-253147) DBG | Getting to WaitForSSH function...
	I0205 03:30:34.838624   77242 main.go:141] libmachine: (bridge-253147) reserved static IP address 192.168.50.246 for domain bridge-253147
	I0205 03:30:34.838637   77242 main.go:141] libmachine: (bridge-253147) waiting for SSH...
	I0205 03:30:34.841690   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.842136   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:34.842163   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.842340   77242 main.go:141] libmachine: (bridge-253147) DBG | Using SSH client type: external
	I0205 03:30:34.842360   77242 main.go:141] libmachine: (bridge-253147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa (-rw-------)
	I0205 03:30:34.842399   77242 main.go:141] libmachine: (bridge-253147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 03:30:34.842418   77242 main.go:141] libmachine: (bridge-253147) DBG | About to run SSH command:
	I0205 03:30:34.842438   77242 main.go:141] libmachine: (bridge-253147) DBG | exit 0
	I0205 03:30:34.973228   77242 main.go:141] libmachine: (bridge-253147) DBG | SSH cmd err, output: <nil>: 
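
WaitForSSH above is just `exit 0` run over ssh until it succeeds. A one-shot version of the same probe, reusing the options and key path shown in the log (the surrounding program is illustrative):

package main

import (
	"log"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa"
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"docker@192.168.50.246", "exit 0")
	// A nil error means the guest's sshd answered and ran the command.
	if err := cmd.Run(); err != nil {
		log.Fatalf("ssh not ready yet: %v", err)
	}
	log.Println("SSH is up")
}
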
	I0205 03:30:34.973525   77242 main.go:141] libmachine: (bridge-253147) KVM machine creation complete
	I0205 03:30:34.973823   77242 main.go:141] libmachine: (bridge-253147) Calling .GetConfigRaw
	I0205 03:30:34.974541   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:34.974708   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:34.974905   77242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0205 03:30:34.974920   77242 main.go:141] libmachine: (bridge-253147) Calling .GetState
	I0205 03:30:34.976304   77242 main.go:141] libmachine: Detecting operating system of created instance...
	I0205 03:30:34.976320   77242 main.go:141] libmachine: Waiting for SSH to be available...
	I0205 03:30:34.976327   77242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0205 03:30:34.976334   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:34.979685   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.980296   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:34.980339   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:34.980477   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:34.980624   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:34.980749   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:34.980852   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:34.981007   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:34.981220   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:34.981232   77242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0205 03:30:35.096392   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:30:35.096414   77242 main.go:141] libmachine: Detecting the provisioner...
	I0205 03:30:35.096421   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:35.099236   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.099611   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.099643   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.099833   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:35.100002   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.100163   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.100285   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:35.100429   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:35.100604   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:35.100616   77242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0205 03:30:35.218495   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0205 03:30:35.218548   77242 main.go:141] libmachine: found compatible host: buildroot
	I0205 03:30:35.218560   77242 main.go:141] libmachine: Provisioning with buildroot...
	I0205 03:30:35.218570   77242 main.go:141] libmachine: (bridge-253147) Calling .GetMachineName
	I0205 03:30:35.218775   77242 buildroot.go:166] provisioning hostname "bridge-253147"
	I0205 03:30:35.218801   77242 main.go:141] libmachine: (bridge-253147) Calling .GetMachineName
	I0205 03:30:35.218961   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:35.221601   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.221894   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.221926   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.222080   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:35.222254   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.222429   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.222566   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:35.222709   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:35.222922   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:35.222943   77242 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-253147 && echo "bridge-253147" | sudo tee /etc/hostname
	I0205 03:30:35.359778   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-253147
	
	I0205 03:30:35.359806   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:35.362911   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.363387   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.363415   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.363627   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:35.363816   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.363976   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.364149   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:35.364339   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:35.364536   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:35.364555   77242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-253147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-253147/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-253147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:30:35.493785   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
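
The script above makes the hostname mapping idempotent: if /etc/hosts already names the machine it is left alone, otherwise the 127.0.1.1 line is rewritten or appended. A small in-memory sketch of the same decision logic (the function name and regexes are illustrative):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell above: keep /etc/hosts as-is if it already
// maps the name, otherwise rewrite the 127.0.1.1 entry or append one.
func ensureHostname(hostsFile, name string) string {
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hostsFile) {
		return hostsFile
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(hostsFile) {
		return loop.ReplaceAllString(hostsFile, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hostsFile, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 minikube\n", "bridge-253147"))
}
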
	I0205 03:30:35.493813   77242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:30:35.493842   77242 buildroot.go:174] setting up certificates
	I0205 03:30:35.493852   77242 provision.go:84] configureAuth start
	I0205 03:30:35.493860   77242 main.go:141] libmachine: (bridge-253147) Calling .GetMachineName
	I0205 03:30:35.494097   77242 main.go:141] libmachine: (bridge-253147) Calling .GetIP
	I0205 03:30:35.496551   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.496935   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.496967   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.497072   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:35.499546   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.499951   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.499978   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.500139   77242 provision.go:143] copyHostCerts
	I0205 03:30:35.500204   77242 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:30:35.500226   77242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:30:35.500312   77242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:30:35.500409   77242 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:30:35.500418   77242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:30:35.500445   77242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:30:35.500510   77242 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:30:35.500517   77242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:30:35.500538   77242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:30:35.500599   77242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.bridge-253147 san=[127.0.0.1 192.168.50.246 bridge-253147 localhost minikube]
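
configureAuth above copies the host CA material and then issues a server certificate whose SANs cover 127.0.0.1, the VM IP and the machine names. A compact sketch of generating such a certificate with Go's standard library; it is self-signed here for brevity (minikube signs with its own CA), and the key size and validity are arbitrary:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-253147"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: loopback, the VM IP, and the host/machine names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.246")},
		DNSNames:    []string{"bridge-253147", "localhost", "minikube"},
	}
	// Self-signed for brevity; the real flow signs with the minikube CA key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
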
	I0205 03:30:35.882545   77242 provision.go:177] copyRemoteCerts
	I0205 03:30:35.882601   77242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:30:35.882621   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:35.885264   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.885625   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:35.885661   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:35.885847   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:35.886014   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:35.886182   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:35.886311   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:35.975581   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0205 03:30:36.000767   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0205 03:30:36.026723   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:30:36.057309   77242 provision.go:87] duration metric: took 563.440863ms to configureAuth
	I0205 03:30:36.057364   77242 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:30:36.057565   77242 config.go:182] Loaded profile config "bridge-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:30:36.057639   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:36.060404   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.060803   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.060835   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.061047   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:36.061260   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.061427   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.061575   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:36.061769   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:36.061968   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:36.061989   77242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:30:36.298630   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
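
The echoed command above drops a CRIO_MINIKUBE_OPTIONS line into /etc/sysconfig/crio.minikube and restarts CRI-O so the service CIDR is treated as an insecure registry range. The same remote one-liner can be rebuilt as a string like this (sketch only; in minikube it is executed through the SSH runner):

package main

import "fmt"

// crioSysconfigCmd returns a shell command equivalent to the one in the log above.
func crioSysconfigCmd(serviceCIDR string) string {
	content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	return fmt.Sprintf(
		`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`,
		content)
}

func main() {
	fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
}
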
	
	I0205 03:30:36.298659   77242 main.go:141] libmachine: Checking connection to Docker...
	I0205 03:30:36.298669   77242 main.go:141] libmachine: (bridge-253147) Calling .GetURL
	I0205 03:30:36.299977   77242 main.go:141] libmachine: (bridge-253147) DBG | using libvirt version 6000000
	I0205 03:30:36.302353   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.302738   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.302779   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.302961   77242 main.go:141] libmachine: Docker is up and running!
	I0205 03:30:36.302971   77242 main.go:141] libmachine: Reticulating splines...
	I0205 03:30:36.302977   77242 client.go:171] duration metric: took 24.494043678s to LocalClient.Create
	I0205 03:30:36.302997   77242 start.go:167] duration metric: took 24.494106892s to libmachine.API.Create "bridge-253147"
	I0205 03:30:36.303007   77242 start.go:293] postStartSetup for "bridge-253147" (driver="kvm2")
	I0205 03:30:36.303015   77242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:30:36.303031   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:36.303251   77242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:30:36.303284   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:36.305501   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.305932   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.305957   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.306294   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:36.306472   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.306619   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:36.306722   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:36.403469   77242 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:30:36.407367   77242 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:30:36.407383   77242 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:30:36.407435   77242 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:30:36.407512   77242 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:30:36.407604   77242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:30:36.416957   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:30:36.441130   77242 start.go:296] duration metric: took 138.106155ms for postStartSetup
	I0205 03:30:36.441177   77242 main.go:141] libmachine: (bridge-253147) Calling .GetConfigRaw
	I0205 03:30:36.441741   77242 main.go:141] libmachine: (bridge-253147) Calling .GetIP
	I0205 03:30:36.444247   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.444572   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.444603   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.444823   77242 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/config.json ...
	I0205 03:30:36.445036   77242 start.go:128] duration metric: took 24.654147667s to createHost
	I0205 03:30:36.445058   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:36.447141   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.447406   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.447434   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.447526   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:36.447690   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.447849   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.447996   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:36.448157   77242 main.go:141] libmachine: Using SSH client type: native
	I0205 03:30:36.448326   77242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0205 03:30:36.448337   77242 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:30:36.566239   77242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738726236.524736391
	
	I0205 03:30:36.566263   77242 fix.go:216] guest clock: 1738726236.524736391
	I0205 03:30:36.566276   77242 fix.go:229] Guest: 2025-02-05 03:30:36.524736391 +0000 UTC Remote: 2025-02-05 03:30:36.445048492 +0000 UTC m=+24.767683288 (delta=79.687899ms)
	I0205 03:30:36.566299   77242 fix.go:200] guest clock delta is within tolerance: 79.687899ms
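
The fix.go lines above read the guest's `date +%s.%N`, compare it with the host-side reference time and accept the clock if the delta is small enough. A small sketch of that comparison; the tolerance value here is an assumption, not minikube's actual threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance parses the guest clock (seconds.nanos from `date +%s.%N`) and
// reports whether it is within tol of the local reference time.
func withinTolerance(guestOut string, local time.Time, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(local)
	return delta, math.Abs(float64(delta)) <= float64(tol)
}

func main() {
	// Values echo the log: guest 1738726236.524736391 vs a local reference ~79ms earlier.
	local := time.Unix(0, int64(1738726236.445048492*float64(time.Second)))
	delta, ok := withinTolerance("1738726236.524736391", local, 2*time.Second)
	fmt.Println(delta, ok)
}
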
	I0205 03:30:36.566306   77242 start.go:83] releasing machines lock for "bridge-253147", held for 24.775483528s
	I0205 03:30:36.566341   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:36.566706   77242 main.go:141] libmachine: (bridge-253147) Calling .GetIP
	I0205 03:30:36.570113   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.570549   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.570577   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.570758   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:36.571264   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:36.571437   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:36.571568   77242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:30:36.571625   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:36.571732   77242 ssh_runner.go:195] Run: cat /version.json
	I0205 03:30:36.571757   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:36.574435   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.574774   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.574794   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.574815   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.575013   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:36.575172   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.575202   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:36.575224   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:36.575423   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:36.575485   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:36.575574   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:36.575594   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:36.575707   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:36.575863   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:36.666606   77242 ssh_runner.go:195] Run: systemctl --version
	I0205 03:30:36.689181   77242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:30:36.844523   77242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:30:36.851067   77242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:30:36.851145   77242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:30:36.877967   77242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
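
The `find ... -exec sh -c "sudo mv {} {}.mk_disabled"` step above sidelines any pre-existing bridge or podman CNI config (here 87-podman-bridge.conflist) so it cannot conflict with the CNI that minikube is about to configure. A rough Go equivalent of that rename pass follows; the function name is made up and the matching is simplified to substring checks.

	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)
	
	// disableConflictingCNI renames bridge/podman CNI configs in dir to
	// <name>.mk_disabled, skipping files that are already disabled.
	func disableConflictingCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}
	
	func main() {
		disabled, err := disableConflictingCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("disabled %d bridge/podman CNI config(s): %v\n", len(disabled), disabled)
	}
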
	I0205 03:30:36.878000   77242 start.go:495] detecting cgroup driver to use...
	I0205 03:30:36.878076   77242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:30:36.902472   77242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:30:36.919323   77242 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:30:36.919375   77242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:30:36.935680   77242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:30:36.952117   77242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:30:37.090962   77242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:30:37.260333   77242 docker.go:233] disabling docker service ...
	I0205 03:30:37.260399   77242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:30:37.274613   77242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:30:37.287948   77242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:30:37.434874   77242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:30:37.547055   77242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:30:37.561538   77242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:30:37.580522   77242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 03:30:37.580577   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.591002   77242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:30:37.591078   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.601654   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.612609   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.629512   77242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:30:37.639950   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.650310   77242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:30:37.666925   77242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
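
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10, switches cgroup_manager to cgroupfs, resets conmon_cgroup to "pod", and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. Here is a sketch of the same rewrite done with Go regexps on the file contents rather than sed; the helper name is illustrative, and the default_sysctls block is simply appended instead of inserted after the conmon_cgroup line.

	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// rewriteCrioConf applies roughly the same substitutions as the sed commands
	// in the log: pause image, cgroup manager, conmon cgroup, unprivileged ports.
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
			ReplaceAllString(conf, `conmon_cgroup = "pod"`)
		if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
			conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
		}
		return conf
	}
	
	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Print(rewriteCrioConf(in))
	}
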
	I0205 03:30:37.677358   77242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:30:37.686660   77242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0205 03:30:37.686720   77242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0205 03:30:37.700081   77242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
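
When `sysctl net.bridge.bridge-nf-call-iptables` exits non-zero, as it does above, the br_netfilter module simply is not loaded yet, so the next steps modprobe it and turn on IPv4 forwarding before restarting CRI-O. A minimal Go sketch of that check-then-load fallback, shelling out the same way the log does:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		// Verify the bridge netfilter sysctl is visible; if not, the module is
		// probably not loaded yet, so modprobe it.
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
				os.Exit(1)
			}
		}
		// Enable IPv4 forwarding so pod traffic can be routed off the node.
		if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
			os.Exit(1)
		}
		fmt.Println("netfilter prerequisites in place")
	}
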
	I0205 03:30:37.709751   77242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:30:37.818587   77242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:30:37.910436   77242 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:30:37.910512   77242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:30:37.915133   77242 start.go:563] Will wait 60s for crictl version
	I0205 03:30:37.915196   77242 ssh_runner.go:195] Run: which crictl
	I0205 03:30:37.918892   77242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:30:37.960248   77242 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:30:37.960337   77242 ssh_runner.go:195] Run: crio --version
	I0205 03:30:37.988457   77242 ssh_runner.go:195] Run: crio --version
	I0205 03:30:38.019169   77242 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0205 03:30:36.568227   77491 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0205 03:30:36.568439   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:36.568499   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:36.586399   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0205 03:30:36.586865   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:36.587473   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:30:36.587506   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:36.587862   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:36.588080   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetMachineName
	I0205 03:30:36.588278   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:30:36.588507   77491 start.go:159] libmachine.API.Create for "enable-default-cni-253147" (driver="kvm2")
	I0205 03:30:36.588537   77491 client.go:168] LocalClient.Create starting
	I0205 03:30:36.588571   77491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem
	I0205 03:30:36.588609   77491 main.go:141] libmachine: Decoding PEM data...
	I0205 03:30:36.588632   77491 main.go:141] libmachine: Parsing certificate...
	I0205 03:30:36.588699   77491 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem
	I0205 03:30:36.588725   77491 main.go:141] libmachine: Decoding PEM data...
	I0205 03:30:36.588743   77491 main.go:141] libmachine: Parsing certificate...
	I0205 03:30:36.588764   77491 main.go:141] libmachine: Running pre-create checks...
	I0205 03:30:36.588779   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .PreCreateCheck
	I0205 03:30:36.589248   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetConfigRaw
	I0205 03:30:36.589780   77491 main.go:141] libmachine: Creating machine...
	I0205 03:30:36.589798   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Create
	I0205 03:30:36.589992   77491 main.go:141] libmachine: (enable-default-cni-253147) creating KVM machine...
	I0205 03:30:36.590011   77491 main.go:141] libmachine: (enable-default-cni-253147) creating network...
	I0205 03:30:36.596901   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found existing default KVM network
	I0205 03:30:36.598770   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.598586   79034 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e6:1a:05} reservation:<nil>}
	I0205 03:30:36.599989   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.599894   79034 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:c2:cc} reservation:<nil>}
	I0205 03:30:36.600768   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.600689   79034 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:30:9e:5e} reservation:<nil>}
	I0205 03:30:36.601805   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.601724   79034 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003d05d0}
	I0205 03:30:36.601858   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | created network xml: 
	I0205 03:30:36.601878   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | <network>
	I0205 03:30:36.601886   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   <name>mk-enable-default-cni-253147</name>
	I0205 03:30:36.601892   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   <dns enable='no'/>
	I0205 03:30:36.601898   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   
	I0205 03:30:36.601907   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0205 03:30:36.601913   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |     <dhcp>
	I0205 03:30:36.601918   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0205 03:30:36.601923   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |     </dhcp>
	I0205 03:30:36.601927   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   </ip>
	I0205 03:30:36.601951   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG |   
	I0205 03:30:36.601978   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | </network>
	I0205 03:30:36.601993   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | 
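
The network XML above is filled in from the free subnet the scan settled on (192.168.72.0/24): DNS disabled, gateway on .1, DHCP handing out .2 through .253. A small Go text/template sketch that renders an equivalent definition; the template fields and struct are illustrative, not minikube's actual template.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// networkTmpl mirrors the structure of the XML printed in the log:
	// an isolated network with DNS disabled and a DHCP range on .2-.253.
	const networkTmpl = `<network>
	  <name>mk-{{.Profile}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
	    </dhcp>
	  </ip>
	</network>
	`
	
	func main() {
		t := template.Must(template.New("net").Parse(networkTmpl))
		data := struct {
			Profile, Gateway, ClientMin, ClientMax string
		}{"enable-default-cni-253147", "192.168.72.1", "192.168.72.2", "192.168.72.253"}
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}
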
	I0205 03:30:36.606932   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | trying to create private KVM network mk-enable-default-cni-253147 192.168.72.0/24...
	I0205 03:30:36.685628   77491 main.go:141] libmachine: (enable-default-cni-253147) setting up store path in /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147 ...
	I0205 03:30:36.685665   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | private KVM network mk-enable-default-cni-253147 192.168.72.0/24 created
	I0205 03:30:36.685677   77491 main.go:141] libmachine: (enable-default-cni-253147) building disk image from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0205 03:30:36.685723   77491 main.go:141] libmachine: (enable-default-cni-253147) Downloading /home/jenkins/minikube-integration/20363-12788/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0205 03:30:36.685743   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.685534   79034 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:30:36.962955   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:36.962841   79034 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa...
	I0205 03:30:37.048897   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:37.048737   79034 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/enable-default-cni-253147.rawdisk...
	I0205 03:30:37.048942   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Writing magic tar header
	I0205 03:30:37.048992   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Writing SSH key tar header
	I0205 03:30:37.049025   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147 (perms=drwx------)
	I0205 03:30:37.049045   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:37.048849   79034 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147 ...
	I0205 03:30:37.049076   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147
	I0205 03:30:37.049090   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube/machines
	I0205 03:30:37.049103   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:30:37.049131   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20363-12788
	I0205 03:30:37.049146   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube/machines (perms=drwxr-xr-x)
	I0205 03:30:37.049162   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788/.minikube (perms=drwxr-xr-x)
	I0205 03:30:37.049176   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins/minikube-integration/20363-12788 (perms=drwxrwxr-x)
	I0205 03:30:37.049187   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0205 03:30:37.049201   77491 main.go:141] libmachine: (enable-default-cni-253147) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0205 03:30:37.049214   77491 main.go:141] libmachine: (enable-default-cni-253147) creating domain...
	I0205 03:30:37.049224   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0205 03:30:37.049235   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home/jenkins
	I0205 03:30:37.049243   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | checking permissions on dir: /home
	I0205 03:30:37.049253   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | skipping /home - not owner
	I0205 03:30:37.050287   77491 main.go:141] libmachine: (enable-default-cni-253147) define libvirt domain using xml: 
	I0205 03:30:37.050315   77491 main.go:141] libmachine: (enable-default-cni-253147) <domain type='kvm'>
	I0205 03:30:37.050330   77491 main.go:141] libmachine: (enable-default-cni-253147)   <name>enable-default-cni-253147</name>
	I0205 03:30:37.050350   77491 main.go:141] libmachine: (enable-default-cni-253147)   <memory unit='MiB'>3072</memory>
	I0205 03:30:37.050360   77491 main.go:141] libmachine: (enable-default-cni-253147)   <vcpu>2</vcpu>
	I0205 03:30:37.050394   77491 main.go:141] libmachine: (enable-default-cni-253147)   <features>
	I0205 03:30:37.050406   77491 main.go:141] libmachine: (enable-default-cni-253147)     <acpi/>
	I0205 03:30:37.050413   77491 main.go:141] libmachine: (enable-default-cni-253147)     <apic/>
	I0205 03:30:37.050425   77491 main.go:141] libmachine: (enable-default-cni-253147)     <pae/>
	I0205 03:30:37.050432   77491 main.go:141] libmachine: (enable-default-cni-253147)     
	I0205 03:30:37.050446   77491 main.go:141] libmachine: (enable-default-cni-253147)   </features>
	I0205 03:30:37.050454   77491 main.go:141] libmachine: (enable-default-cni-253147)   <cpu mode='host-passthrough'>
	I0205 03:30:37.050466   77491 main.go:141] libmachine: (enable-default-cni-253147)   
	I0205 03:30:37.050473   77491 main.go:141] libmachine: (enable-default-cni-253147)   </cpu>
	I0205 03:30:37.050500   77491 main.go:141] libmachine: (enable-default-cni-253147)   <os>
	I0205 03:30:37.050523   77491 main.go:141] libmachine: (enable-default-cni-253147)     <type>hvm</type>
	I0205 03:30:37.050536   77491 main.go:141] libmachine: (enable-default-cni-253147)     <boot dev='cdrom'/>
	I0205 03:30:37.050550   77491 main.go:141] libmachine: (enable-default-cni-253147)     <boot dev='hd'/>
	I0205 03:30:37.050563   77491 main.go:141] libmachine: (enable-default-cni-253147)     <bootmenu enable='no'/>
	I0205 03:30:37.050570   77491 main.go:141] libmachine: (enable-default-cni-253147)   </os>
	I0205 03:30:37.050580   77491 main.go:141] libmachine: (enable-default-cni-253147)   <devices>
	I0205 03:30:37.050601   77491 main.go:141] libmachine: (enable-default-cni-253147)     <disk type='file' device='cdrom'>
	I0205 03:30:37.050616   77491 main.go:141] libmachine: (enable-default-cni-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/boot2docker.iso'/>
	I0205 03:30:37.050630   77491 main.go:141] libmachine: (enable-default-cni-253147)       <target dev='hdc' bus='scsi'/>
	I0205 03:30:37.050640   77491 main.go:141] libmachine: (enable-default-cni-253147)       <readonly/>
	I0205 03:30:37.050650   77491 main.go:141] libmachine: (enable-default-cni-253147)     </disk>
	I0205 03:30:37.050667   77491 main.go:141] libmachine: (enable-default-cni-253147)     <disk type='file' device='disk'>
	I0205 03:30:37.050701   77491 main.go:141] libmachine: (enable-default-cni-253147)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0205 03:30:37.050722   77491 main.go:141] libmachine: (enable-default-cni-253147)       <source file='/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/enable-default-cni-253147.rawdisk'/>
	I0205 03:30:37.050735   77491 main.go:141] libmachine: (enable-default-cni-253147)       <target dev='hda' bus='virtio'/>
	I0205 03:30:37.050742   77491 main.go:141] libmachine: (enable-default-cni-253147)     </disk>
	I0205 03:30:37.050754   77491 main.go:141] libmachine: (enable-default-cni-253147)     <interface type='network'>
	I0205 03:30:37.050766   77491 main.go:141] libmachine: (enable-default-cni-253147)       <source network='mk-enable-default-cni-253147'/>
	I0205 03:30:37.050778   77491 main.go:141] libmachine: (enable-default-cni-253147)       <model type='virtio'/>
	I0205 03:30:37.050785   77491 main.go:141] libmachine: (enable-default-cni-253147)     </interface>
	I0205 03:30:37.050820   77491 main.go:141] libmachine: (enable-default-cni-253147)     <interface type='network'>
	I0205 03:30:37.050846   77491 main.go:141] libmachine: (enable-default-cni-253147)       <source network='default'/>
	I0205 03:30:37.050859   77491 main.go:141] libmachine: (enable-default-cni-253147)       <model type='virtio'/>
	I0205 03:30:37.050872   77491 main.go:141] libmachine: (enable-default-cni-253147)     </interface>
	I0205 03:30:37.050886   77491 main.go:141] libmachine: (enable-default-cni-253147)     <serial type='pty'>
	I0205 03:30:37.050896   77491 main.go:141] libmachine: (enable-default-cni-253147)       <target port='0'/>
	I0205 03:30:37.050910   77491 main.go:141] libmachine: (enable-default-cni-253147)     </serial>
	I0205 03:30:37.050935   77491 main.go:141] libmachine: (enable-default-cni-253147)     <console type='pty'>
	I0205 03:30:37.050954   77491 main.go:141] libmachine: (enable-default-cni-253147)       <target type='serial' port='0'/>
	I0205 03:30:37.050968   77491 main.go:141] libmachine: (enable-default-cni-253147)     </console>
	I0205 03:30:37.050981   77491 main.go:141] libmachine: (enable-default-cni-253147)     <rng model='virtio'>
	I0205 03:30:37.050996   77491 main.go:141] libmachine: (enable-default-cni-253147)       <backend model='random'>/dev/random</backend>
	I0205 03:30:37.051013   77491 main.go:141] libmachine: (enable-default-cni-253147)     </rng>
	I0205 03:30:37.051025   77491 main.go:141] libmachine: (enable-default-cni-253147)     
	I0205 03:30:37.051033   77491 main.go:141] libmachine: (enable-default-cni-253147)     
	I0205 03:30:37.051058   77491 main.go:141] libmachine: (enable-default-cni-253147)   </devices>
	I0205 03:30:37.051069   77491 main.go:141] libmachine: (enable-default-cni-253147) </domain>
	I0205 03:30:37.051084   77491 main.go:141] libmachine: (enable-default-cni-253147) 
	I0205 03:30:37.057819   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:ca:ee:a6 in network default
	I0205 03:30:37.058459   77491 main.go:141] libmachine: (enable-default-cni-253147) starting domain...
	I0205 03:30:37.058492   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:37.058503   77491 main.go:141] libmachine: (enable-default-cni-253147) ensuring networks are active...
	I0205 03:30:37.059172   77491 main.go:141] libmachine: (enable-default-cni-253147) Ensuring network default is active
	I0205 03:30:37.059502   77491 main.go:141] libmachine: (enable-default-cni-253147) Ensuring network mk-enable-default-cni-253147 is active
	I0205 03:30:37.060055   77491 main.go:141] libmachine: (enable-default-cni-253147) getting domain XML...
	I0205 03:30:37.060913   77491 main.go:141] libmachine: (enable-default-cni-253147) creating domain...
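
Once the domain XML is defined, the driver checks that both libvirt networks are active and then boots the domain. The driver talks to libvirt directly through its API, but the same define-and-start flow can be sketched by shelling out to virsh, as below; the XML file path and the error handling are illustrative.

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// run invokes a virsh subcommand against the system libvirt instance.
	func run(args ...string) error {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}
	
	func main() {
		// net-start errors if the network is already active, so tolerate that;
		// the driver instead checks the active flag first.
		for _, net := range []string{"default", "mk-enable-default-cni-253147"} {
			if err := run("net-start", net); err != nil {
				fmt.Printf("network %s: %v (may already be active)\n", net, err)
			}
		}
		// Register the domain from its XML and boot it. The XML path is
		// illustrative; the log builds the XML in memory.
		if err := run("define", "/tmp/enable-default-cni-253147.xml"); err != nil {
			fmt.Fprintln(os.Stderr, "define failed:", err)
			os.Exit(1)
		}
		if err := run("start", "enable-default-cni-253147"); err != nil {
			fmt.Fprintln(os.Stderr, "start failed:", err)
			os.Exit(1)
		}
	}
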
	I0205 03:30:38.021680   77242 main.go:141] libmachine: (bridge-253147) Calling .GetIP
	I0205 03:30:38.024681   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:38.025408   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:38.025438   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:38.025657   77242 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0205 03:30:38.030062   77242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:30:38.042534   77242 kubeadm.go:883] updating cluster {Name:bridge-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-253147 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:30:38.042666   77242 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:30:38.042721   77242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:30:38.073875   77242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0205 03:30:38.073941   77242 ssh_runner.go:195] Run: which lz4
	I0205 03:30:38.077754   77242 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0205 03:30:38.081778   77242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0205 03:30:38.081812   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0205 03:30:39.456367   77242 crio.go:462] duration metric: took 1.378647417s to copy over tarball
	I0205 03:30:39.456478   77242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0205 03:30:41.702914   77242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.246400913s)
	I0205 03:30:41.702942   77242 crio.go:469] duration metric: took 2.246548889s to extract the tarball
	I0205 03:30:41.702949   77242 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0205 03:30:38.623277   77491 main.go:141] libmachine: (enable-default-cni-253147) waiting for IP...
	I0205 03:30:38.624244   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:38.624745   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:38.624808   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:38.624736   79034 retry.go:31] will retry after 225.225942ms: waiting for domain to come up
	I0205 03:30:38.851117   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:38.851700   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:38.851736   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:38.851663   79034 retry.go:31] will retry after 298.69382ms: waiting for domain to come up
	I0205 03:30:39.152119   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:39.152754   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:39.152784   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:39.152732   79034 retry.go:31] will retry after 386.740633ms: waiting for domain to come up
	I0205 03:30:39.541393   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:39.542023   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:39.542053   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:39.541971   79034 retry.go:31] will retry after 608.707393ms: waiting for domain to come up
	I0205 03:30:40.152792   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:40.153372   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:40.153416   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:40.153305   79034 retry.go:31] will retry after 759.53705ms: waiting for domain to come up
	I0205 03:30:40.914923   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:40.915442   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:40.915482   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:40.915365   79034 retry.go:31] will retry after 831.206233ms: waiting for domain to come up
	I0205 03:30:41.747692   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:41.748289   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:41.748312   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:41.748252   79034 retry.go:31] will retry after 976.271323ms: waiting for domain to come up
	I0205 03:30:42.725992   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:42.726511   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:42.726541   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:42.726474   79034 retry.go:31] will retry after 1.384186891s: waiting for domain to come up
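
The retry lines above show the driver polling libvirt's DHCP leases for the new domain's MAC with a delay that grows on each attempt (225ms, 298ms, 386ms, ... 1.38s) until an address appears. A compact sketch of that poll loop; the lease lookup is stubbed out and the backoff constants are assumptions, not the retry.go implementation.

	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// lookupLeaseIP stands in for querying libvirt's DHCP leases for a MAC
	// address; the real lookup succeeds once the guest has requested a lease.
	func lookupLeaseIP(mac string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}
	
	// waitForIP polls until the domain has an IP, growing the wait between
	// attempts and adding jitter, up to a deadline.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		wait := 200 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
			fmt.Printf("attempt %d: will retry after %v: waiting for domain to come up\n", attempt, sleep)
			time.Sleep(sleep)
			wait = wait * 3 / 2 // grow the base delay, roughly matching the log's progression
		}
		return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
	}
	
	func main() {
		// Short timeout so the stubbed demo terminates quickly.
		ip, err := waitForIP("52:54:00:f2:b1:0a", 3*time.Second)
		fmt.Println(ip, err)
	}
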
	I0205 03:30:41.742178   77242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:30:41.783096   77242 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:30:41.783121   77242 cache_images.go:84] Images are preloaded, skipping loading
	I0205 03:30:41.783129   77242 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.32.1 crio true true} ...
	I0205 03:30:41.783238   77242 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-253147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0205 03:30:41.783322   77242 ssh_runner.go:195] Run: crio config
	I0205 03:30:41.827102   77242 cni.go:84] Creating CNI manager for "bridge"
	I0205 03:30:41.827126   77242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:30:41.827149   77242 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-253147 NodeName:bridge-253147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 03:30:41.827274   77242 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-253147"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 03:30:41.827330   77242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 03:30:41.838886   77242 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:30:41.838962   77242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:30:41.849628   77242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0205 03:30:41.865611   77242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:30:41.881881   77242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0205 03:30:41.899035   77242 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I0205 03:30:41.903180   77242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
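
Both /etc/hosts edits above (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: drop any existing line for the name, append the fresh IP-to-name mapping, and copy the result back with sudo. The same logic expressed in Go, as a sketch of what the bash one-liner does:

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// upsertHost rewrites an /etc/hosts-style file so it contains exactly one
	// entry for host, pointing at ip.
	func upsertHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop any stale mapping for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		// Needs root to write /etc/hosts, just like the sudo cp in the log.
		if err := upsertHost("/etc/hosts", "192.168.50.246", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
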
	I0205 03:30:41.915482   77242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:30:42.037310   77242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:30:42.054641   77242 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147 for IP: 192.168.50.246
	I0205 03:30:42.054670   77242 certs.go:194] generating shared ca certs ...
	I0205 03:30:42.054687   77242 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.054872   77242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:30:42.054937   77242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:30:42.054951   77242 certs.go:256] generating profile certs ...
	I0205 03:30:42.055020   77242 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.key
	I0205 03:30:42.055037   77242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt with IP's: []
	I0205 03:30:42.569882   77242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt ...
	I0205 03:30:42.569913   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.crt: {Name:mk9a07762772c282594ff48594c243d2d9334ccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.570097   77242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.key ...
	I0205 03:30:42.570118   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/client.key: {Name:mk87a68ecf8140f29e5563ad400fddaa65c48f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.570236   77242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key.dc1c4caa
	I0205 03:30:42.570253   77242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt.dc1c4caa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.246]
	I0205 03:30:42.774168   77242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt.dc1c4caa ...
	I0205 03:30:42.774204   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt.dc1c4caa: {Name:mk67a191c1ba6ea30d49291e3357f57aedb3b4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.774371   77242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key.dc1c4caa ...
	I0205 03:30:42.774382   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key.dc1c4caa: {Name:mk2bd494cab1fae00acfa6c66a4fba8665b6a2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.774453   77242 certs.go:381] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt.dc1c4caa -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt
	I0205 03:30:42.774521   77242 certs.go:385] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key.dc1c4caa -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key
	I0205 03:30:42.774572   77242 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.key
	I0205 03:30:42.774587   77242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.crt with IP's: []
	I0205 03:30:42.937538   77242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.crt ...
	I0205 03:30:42.937565   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.crt: {Name:mk1f0ff274bc255dae590ed4bd030fbfba893f8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:42.937751   77242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.key ...
	I0205 03:30:42.937766   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.key: {Name:mk9f536bd3f2c933036a6bc72e71b0bba8b96640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
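
The profile certs above are all minted from the shared minikube CA: a client cert, an apiserver serving cert whose SANs cover 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.50.246, and an aggregator proxy-client cert. Below is a self-contained Go sketch of issuing an IP-SAN serving cert from a CA key pair; it is a simplified stand-in for minikube's crypto.go, and the subject, key size and validity period are assumptions.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	// issueServingCert signs a serving certificate for the given IP SANs with
	// the supplied CA certificate and key.
	func issueServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}
	
	func main() {
		// Throwaway self-signed CA for the demo; minikube reuses the CA stored
		// under .minikube/ instead of generating a new one.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)
	
		ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.246")}
		certPEM, err := issueServingCert(caCert, caKey, ips)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued %d-byte PEM certificate with %d IP SANs\n", len(certPEM), len(ips))
	}
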
	I0205 03:30:42.937961   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:30:42.937997   77242 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:30:42.938006   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:30:42.938028   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:30:42.938051   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:30:42.938072   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:30:42.938108   77242 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:30:42.938629   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:30:42.972473   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:30:42.998804   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:30:43.025966   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:30:43.050717   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0205 03:30:43.075229   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0205 03:30:43.103011   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:30:43.127446   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/bridge-253147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 03:30:43.150497   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:30:43.173604   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:30:43.195689   77242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:30:43.218757   77242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:30:43.237177   77242 ssh_runner.go:195] Run: openssl version
	I0205 03:30:43.242897   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:30:43.254768   77242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:30:43.259295   77242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:30:43.259333   77242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:30:43.265475   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:30:43.276137   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:30:43.286900   77242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:30:43.291399   77242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:30:43.291460   77242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:30:43.297115   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 03:30:43.309405   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19989.pem && ln -fs /usr/share/ca-certificates/19989.pem /etc/ssl/certs/19989.pem"
	I0205 03:30:43.319635   77242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19989.pem
	I0205 03:30:43.323886   77242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:11 /usr/share/ca-certificates/19989.pem
	I0205 03:30:43.323936   77242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19989.pem
	I0205 03:30:43.329782   77242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19989.pem /etc/ssl/certs/51391683.0"
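
The openssl steps above place each CA file under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its subject-hash name (3ec20f2e.0, b5213941.0 and 51391683.0 in this run), which is the layout OpenSSL's hashed-directory lookup expects. A Go sketch of that hash-and-link step, shelling out to openssl just as the log does; collision suffixes beyond .0 are ignored for brevity.

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkByHash computes `openssl x509 -hash -noout -in cert` and creates the
	// /etc/ssl/certs/<hash>.0 symlink OpenSSL expects, if it is not there yet.
	func linkByHash(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", fmt.Errorf("hashing %s: %w", certPath, err)
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		if _, err := os.Lstat(link); err == nil {
			return link, nil // already linked
		}
		return link, os.Symlink(certPath, link)
	}
	
	func main() {
		for _, cert := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/19989.pem",
			"/usr/share/ca-certificates/199892.pem",
		} {
			link, err := linkByHash(cert, "/etc/ssl/certs")
			fmt.Println(cert, "->", link, err)
		}
	}
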
	I0205 03:30:43.340548   77242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 03:30:43.344485   77242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0205 03:30:43.344534   77242 kubeadm.go:392] StartCluster: {Name:bridge-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-253147 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:30:43.344602   77242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 03:30:43.344657   77242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:30:43.381149   77242 cri.go:89] found id: ""
	I0205 03:30:43.381213   77242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 03:30:43.391256   77242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 03:30:43.400986   77242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:30:43.410378   77242 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:30:43.410396   77242 kubeadm.go:157] found existing configuration files:
	
	I0205 03:30:43.410431   77242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:30:43.420609   77242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:30:43.420667   77242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:30:43.430573   77242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:30:43.440182   77242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:30:43.440244   77242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:30:43.449936   77242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:30:43.459622   77242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:30:43.459680   77242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:30:43.469604   77242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:30:43.478500   77242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:30:43.478562   77242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 03:30:43.487477   77242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:30:43.546419   77242 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0205 03:30:43.546512   77242 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:30:43.649107   77242 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:30:43.649286   77242 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:30:43.649477   77242 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0205 03:30:43.658297   77242 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:30:43.787428   77242 out.go:235]   - Generating certificates and keys ...
	I0205 03:30:43.787557   77242 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:30:43.787670   77242 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:30:43.842289   77242 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 03:30:43.993164   77242 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 03:30:44.119322   77242 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 03:30:44.302079   77242 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 03:30:44.425710   77242 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 03:30:44.425881   77242 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-253147 localhost] and IPs [192.168.50.246 127.0.0.1 ::1]
	I0205 03:30:44.571842   77242 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 03:30:44.572049   77242 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-253147 localhost] and IPs [192.168.50.246 127.0.0.1 ::1]
	I0205 03:30:44.694047   77242 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 03:30:44.746113   77242 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 03:30:44.857769   77242 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 03:30:44.857851   77242 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:30:45.173228   77242 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:30:45.327637   77242 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0205 03:30:45.561572   77242 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:30:45.829713   77242 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:30:45.971023   77242 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:30:45.971651   77242 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:30:45.974124   77242 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:30:45.976752   77242 out.go:235]   - Booting up control plane ...
	I0205 03:30:45.976870   77242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:30:45.976949   77242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:30:45.977022   77242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:30:45.997190   77242 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:30:46.005270   77242 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:30:46.005384   77242 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:30:46.127130   77242 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0205 03:30:46.127269   77242 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0205 03:30:44.111979   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:44.112507   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:44.112555   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:44.112499   79034 retry.go:31] will retry after 1.790961133s: waiting for domain to come up
	I0205 03:30:45.905133   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:45.905720   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:45.905748   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:45.905692   79034 retry.go:31] will retry after 1.666031127s: waiting for domain to come up
	I0205 03:30:47.573282   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:47.573933   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:47.573968   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:47.573886   79034 retry.go:31] will retry after 1.867135722s: waiting for domain to come up
	I0205 03:30:47.128625   77242 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001366822s
	I0205 03:30:47.128739   77242 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0205 03:30:51.630336   77242 kubeadm.go:310] [api-check] The API server is healthy after 4.501384276s
	I0205 03:30:51.643882   77242 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0205 03:30:52.160875   77242 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0205 03:30:52.194216   77242 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0205 03:30:52.194481   77242 kubeadm.go:310] [mark-control-plane] Marking the node bridge-253147 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0205 03:30:52.204470   77242 kubeadm.go:310] [bootstrap-token] Using token: cylh84.xficas9ll5cpdlvf
	I0205 03:30:49.444153   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:49.444687   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:49.444714   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:49.444635   79034 retry.go:31] will retry after 2.913102259s: waiting for domain to come up
	I0205 03:30:52.359492   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:52.360086   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:52.360115   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:52.360057   79034 retry.go:31] will retry after 4.239584755s: waiting for domain to come up
	I0205 03:30:52.205969   77242 out.go:235]   - Configuring RBAC rules ...
	I0205 03:30:52.206118   77242 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0205 03:30:52.212375   77242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0205 03:30:52.218595   77242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0205 03:30:52.221987   77242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0205 03:30:52.225513   77242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0205 03:30:52.231498   77242 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0205 03:30:52.355302   77242 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0205 03:30:52.792077   77242 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0205 03:30:53.360431   77242 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0205 03:30:53.360467   77242 kubeadm.go:310] 
	I0205 03:30:53.360595   77242 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0205 03:30:53.360622   77242 kubeadm.go:310] 
	I0205 03:30:53.360747   77242 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0205 03:30:53.360762   77242 kubeadm.go:310] 
	I0205 03:30:53.360805   77242 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0205 03:30:53.360886   77242 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0205 03:30:53.360965   77242 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0205 03:30:53.360977   77242 kubeadm.go:310] 
	I0205 03:30:53.361085   77242 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0205 03:30:53.361103   77242 kubeadm.go:310] 
	I0205 03:30:53.361154   77242 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0205 03:30:53.361174   77242 kubeadm.go:310] 
	I0205 03:30:53.361233   77242 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0205 03:30:53.361360   77242 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0205 03:30:53.361462   77242 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0205 03:30:53.361470   77242 kubeadm.go:310] 
	I0205 03:30:53.361571   77242 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0205 03:30:53.361685   77242 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0205 03:30:53.361694   77242 kubeadm.go:310] 
	I0205 03:30:53.361795   77242 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cylh84.xficas9ll5cpdlvf \
	I0205 03:30:53.361931   77242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 \
	I0205 03:30:53.361962   77242 kubeadm.go:310] 	--control-plane 
	I0205 03:30:53.361979   77242 kubeadm.go:310] 
	I0205 03:30:53.362083   77242 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0205 03:30:53.362097   77242 kubeadm.go:310] 
	I0205 03:30:53.362206   77242 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cylh84.xficas9ll5cpdlvf \
	I0205 03:30:53.362326   77242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 
	I0205 03:30:53.362602   77242 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
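The init output closes with a warning that the kubelet service is not enabled plus ready-made join commands; a small hedged follow-up, run as root on the node (assuming systemctl and kubeadm are on PATH there):
	sudo systemctl enable kubelet.service   # addresses the [WARNING Service-Kubelet] above
	sudo kubeadm token list                 # the cylh84.* bootstrap token should appear until it expires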
	I0205 03:30:53.362732   77242 cni.go:84] Creating CNI manager for "bridge"
	I0205 03:30:53.364286   77242 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0205 03:30:53.365521   77242 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0205 03:30:53.380070   77242 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
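minikube copies a 496-byte bridge conflist for CRI-O to pick up; a hedged way to confirm it landed on the guest (file name taken from the log, exact contents not shown here):
	sudo ls -l /etc/cni/net.d/
	sudo cat /etc/cni/net.d/1-k8s.conflist   # typically a "bridge" plugin plus "portmap", with host-local IPAM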
	I0205 03:30:53.397428   77242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 03:30:53.397528   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:53.397539   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-253147 minikube.k8s.io/updated_at=2025_02_05T03_30_53_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d minikube.k8s.io/name=bridge-253147 minikube.k8s.io/primary=true
	I0205 03:30:53.516425   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:53.550197   77242 ops.go:34] apiserver oom_adj: -16
	I0205 03:30:54.016512   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:54.516802   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:55.016599   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:55.516750   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:56.017128   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:56.516827   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:57.017214   77242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:30:57.124482   77242 kubeadm.go:1113] duration metric: took 3.727019948s to wait for elevateKubeSystemPrivileges
	I0205 03:30:57.124523   77242 kubeadm.go:394] duration metric: took 13.779991885s to StartCluster
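The elevateKubeSystemPrivileges step above created the minikube-rbac clusterrolebinding for kube-system:default; a hedged check against the cluster (assuming the kubeconfig written for this profile is active):
	kubectl get clusterrolebinding minikube-rbac -o wide
	kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default   # expected: yes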
	I0205 03:30:57.124540   77242 settings.go:142] acquiring lock: {Name:mk2eca847da5ba78f5b041a83e5cfcbdebb0c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:57.124619   77242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:30:57.125807   77242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:30:57.126067   77242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0205 03:30:57.126069   77242 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:30:57.126143   77242 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0205 03:30:57.126263   77242 config.go:182] Loaded profile config "bridge-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:30:57.126269   77242 addons.go:69] Setting storage-provisioner=true in profile "bridge-253147"
	I0205 03:30:57.126285   77242 addons.go:69] Setting default-storageclass=true in profile "bridge-253147"
	I0205 03:30:57.126336   77242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-253147"
	I0205 03:30:57.126289   77242 addons.go:238] Setting addon storage-provisioner=true in "bridge-253147"
	I0205 03:30:57.126431   77242 host.go:66] Checking if "bridge-253147" exists ...
	I0205 03:30:57.126791   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:57.126821   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:57.126827   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:57.126870   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:57.128477   77242 out.go:177] * Verifying Kubernetes components...
	I0205 03:30:57.129687   77242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:30:57.142273   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0205 03:30:57.142285   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38755
	I0205 03:30:57.142713   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:57.142745   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:57.143234   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:57.143236   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:57.143257   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:57.143274   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:57.143634   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:57.143692   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:57.143929   77242 main.go:141] libmachine: (bridge-253147) Calling .GetState
	I0205 03:30:57.144286   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:57.144329   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:57.147278   77242 addons.go:238] Setting addon default-storageclass=true in "bridge-253147"
	I0205 03:30:57.147316   77242 host.go:66] Checking if "bridge-253147" exists ...
	I0205 03:30:57.147680   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:57.147729   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:57.160541   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I0205 03:30:57.160988   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:57.161624   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:57.161653   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:57.162040   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:57.162245   77242 main.go:141] libmachine: (bridge-253147) Calling .GetState
	I0205 03:30:57.162579   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0205 03:30:57.162951   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:57.163339   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:57.163362   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:57.163669   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:57.164316   77242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:30:57.164359   77242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:30:57.164577   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:57.166160   77242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:30:57.167311   77242 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:30:57.167328   77242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0205 03:30:57.167341   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:57.170492   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:57.170949   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:57.170979   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:57.171259   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:57.171440   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:57.171584   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:57.171739   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:57.180157   77242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38219
	I0205 03:30:57.180673   77242 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:30:57.181154   77242 main.go:141] libmachine: Using API Version  1
	I0205 03:30:57.181177   77242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:30:57.181558   77242 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:30:57.181780   77242 main.go:141] libmachine: (bridge-253147) Calling .GetState
	I0205 03:30:57.183306   77242 main.go:141] libmachine: (bridge-253147) Calling .DriverName
	I0205 03:30:57.183531   77242 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0205 03:30:57.183546   77242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0205 03:30:57.183562   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHHostname
	I0205 03:30:57.186167   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:57.186542   77242 main.go:141] libmachine: (bridge-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:4a:f9", ip: ""} in network mk-bridge-253147: {Iface:virbr1 ExpiryTime:2025-02-05 04:30:27 +0000 UTC Type:0 Mac:52:54:00:8f:4a:f9 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:bridge-253147 Clientid:01:52:54:00:8f:4a:f9}
	I0205 03:30:57.186570   77242 main.go:141] libmachine: (bridge-253147) DBG | domain bridge-253147 has defined IP address 192.168.50.246 and MAC address 52:54:00:8f:4a:f9 in network mk-bridge-253147
	I0205 03:30:57.186722   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHPort
	I0205 03:30:57.186925   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHKeyPath
	I0205 03:30:57.187092   77242 main.go:141] libmachine: (bridge-253147) Calling .GetSSHUsername
	I0205 03:30:57.187223   77242 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/bridge-253147/id_rsa Username:docker}
	I0205 03:30:57.353736   77242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0205 03:30:57.353846   77242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:30:57.377831   77242 node_ready.go:35] waiting up to 15m0s for node "bridge-253147" to be "Ready" ...
	I0205 03:30:57.414576   77242 node_ready.go:49] node "bridge-253147" has status "Ready":"True"
	I0205 03:30:57.414599   77242 node_ready.go:38] duration metric: took 36.726589ms for node "bridge-253147" to be "Ready" ...
	I0205 03:30:57.414609   77242 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:30:57.434337   77242 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:30:57.501961   77242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0205 03:30:57.530380   77242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:30:57.754080   77242 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
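The replace pipeline above injects a hosts block into the CoreDNS Corefile so pods can resolve host.minikube.internal; a hedged way to confirm the injected record:
	kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	# expected to show: 192.168.50.1 host.minikube.internal, followed by fallthrough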
	I0205 03:30:57.802527   77242 main.go:141] libmachine: Making call to close driver server
	I0205 03:30:57.802558   77242 main.go:141] libmachine: (bridge-253147) Calling .Close
	I0205 03:30:57.802869   77242 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:30:57.802889   77242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:30:57.802900   77242 main.go:141] libmachine: Making call to close driver server
	I0205 03:30:57.802909   77242 main.go:141] libmachine: (bridge-253147) Calling .Close
	I0205 03:30:57.803171   77242 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:30:57.803178   77242 main.go:141] libmachine: (bridge-253147) DBG | Closing plugin on server side
	I0205 03:30:57.803187   77242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:30:57.818026   77242 main.go:141] libmachine: Making call to close driver server
	I0205 03:30:57.818048   77242 main.go:141] libmachine: (bridge-253147) Calling .Close
	I0205 03:30:57.818320   77242 main.go:141] libmachine: (bridge-253147) DBG | Closing plugin on server side
	I0205 03:30:57.818383   77242 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:30:57.818396   77242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:30:58.036646   77242 main.go:141] libmachine: Making call to close driver server
	I0205 03:30:58.036669   77242 main.go:141] libmachine: (bridge-253147) Calling .Close
	I0205 03:30:58.036933   77242 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:30:58.036945   77242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:30:58.036953   77242 main.go:141] libmachine: Making call to close driver server
	I0205 03:30:58.036959   77242 main.go:141] libmachine: (bridge-253147) Calling .Close
	I0205 03:30:58.037451   77242 main.go:141] libmachine: (bridge-253147) DBG | Closing plugin on server side
	I0205 03:30:58.037460   77242 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:30:58.037478   77242 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:30:58.038806   77242 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0205 03:30:56.600932   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:30:56.601508   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find current IP address of domain enable-default-cni-253147 in network mk-enable-default-cni-253147
	I0205 03:30:56.601541   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | I0205 03:30:56.601477   79034 retry.go:31] will retry after 3.74327237s: waiting for domain to come up
	I0205 03:30:58.039791   77242 addons.go:514] duration metric: took 913.647604ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0205 03:30:58.259520   77242 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-253147" context rescaled to 1 replicas
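With both addons enabled and coredns rescaled to one replica, a short verification sketch (the default storage class name is assumed to be minikube's usual "standard"):
	kubectl get storageclass                                                        # assumption: "standard" marked (default)
	kubectl -n kube-system get pod storage-provisioner
	kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'    # expected: 1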
	I0205 03:30:59.439609   77242 pod_ready.go:103] pod "etcd-bridge-253147" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:01.440073   77242 pod_ready.go:103] pod "etcd-bridge-253147" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:00.349372   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:00.349816   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has current primary IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:00.349855   77491 main.go:141] libmachine: (enable-default-cni-253147) found domain IP: 192.168.72.143
	I0205 03:31:00.349879   77491 main.go:141] libmachine: (enable-default-cni-253147) reserving static IP address...
	I0205 03:31:00.350085   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-253147", mac: "52:54:00:f2:b1:0a", ip: "192.168.72.143"} in network mk-enable-default-cni-253147
	I0205 03:31:00.427216   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Getting to WaitForSSH function...
	I0205 03:31:00.427258   77491 main.go:141] libmachine: (enable-default-cni-253147) reserved static IP address 192.168.72.143 for domain enable-default-cni-253147
	I0205 03:31:00.427290   77491 main.go:141] libmachine: (enable-default-cni-253147) waiting for SSH...
	I0205 03:31:00.429895   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:00.430236   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147
	I0205 03:31:00.430266   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | unable to find defined IP address of network mk-enable-default-cni-253147 interface with MAC address 52:54:00:f2:b1:0a
	I0205 03:31:00.430425   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Using SSH client type: external
	I0205 03:31:00.430455   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa (-rw-------)
	I0205 03:31:00.430493   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 03:31:00.430517   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | About to run SSH command:
	I0205 03:31:00.430543   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | exit 0
	I0205 03:31:00.434282   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | SSH cmd err, output: exit status 255: 
	I0205 03:31:00.434311   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0205 03:31:00.434322   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | command : exit 0
	I0205 03:31:00.434334   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | err     : exit status 255
	I0205 03:31:00.434347   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | output  : 
	I0205 03:31:02.439676   77242 pod_ready.go:93] pod "etcd-bridge-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:31:02.439701   77242 pod_ready.go:82] duration metric: took 5.005335594s for pod "etcd-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:02.439711   77242 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:02.443180   77242 pod_ready.go:93] pod "kube-apiserver-bridge-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:31:02.443200   77242 pod_ready.go:82] duration metric: took 3.483384ms for pod "kube-apiserver-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:02.443208   77242 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.449470   77242 pod_ready.go:103] pod "kube-controller-manager-bridge-253147" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:04.949016   77242 pod_ready.go:93] pod "kube-controller-manager-bridge-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:31:04.949040   77242 pod_ready.go:82] duration metric: took 2.505824176s for pod "kube-controller-manager-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.949054   77242 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-tznhk" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.953268   77242 pod_ready.go:93] pod "kube-proxy-tznhk" in "kube-system" namespace has status "Ready":"True"
	I0205 03:31:04.953291   77242 pod_ready.go:82] duration metric: took 4.228529ms for pod "kube-proxy-tznhk" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.953302   77242 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.956495   77242 pod_ready.go:93] pod "kube-scheduler-bridge-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:31:04.956514   77242 pod_ready.go:82] duration metric: took 3.2049ms for pod "kube-scheduler-bridge-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:04.956524   77242 pod_ready.go:39] duration metric: took 7.541903694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
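The pod_ready loop above is roughly what waiting on the Ready condition per control-plane pod looks like with plain kubectl; a hedged equivalent using the same labels listed in the log:
	kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=120s
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=120s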
	I0205 03:31:04.956542   77242 api_server.go:52] waiting for apiserver process to appear ...
	I0205 03:31:04.956595   77242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:31:04.972420   77242 api_server.go:72] duration metric: took 7.846318586s to wait for apiserver process to appear ...
	I0205 03:31:04.972448   77242 api_server.go:88] waiting for apiserver healthz status ...
	I0205 03:31:04.972468   77242 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I0205 03:31:04.977100   77242 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I0205 03:31:04.978247   77242 api_server.go:141] control plane version: v1.32.1
	I0205 03:31:04.978277   77242 api_server.go:131] duration metric: took 5.821325ms to wait for apiserver health ...
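The healthz probe and version check above can be reproduced from the host with kubectl alone, assuming the kubeconfig minikube just wrote is active:
	kubectl get --raw /healthz   # expected: ok
	kubectl version              # server should report v1.32.1, matching the log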
	I0205 03:31:04.978287   77242 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 03:31:04.980794   77242 system_pods.go:59] 7 kube-system pods found
	I0205 03:31:04.980825   77242 system_pods.go:61] "coredns-668d6bf9bc-w4q4d" [a2c00545-1eec-40a6-b4c6-0496a18806e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0205 03:31:04.980831   77242 system_pods.go:61] "etcd-bridge-253147" [f40b76f0-91f6-42de-b983-960541a36f7f] Running
	I0205 03:31:04.980836   77242 system_pods.go:61] "kube-apiserver-bridge-253147" [3b9d175a-2a6b-40f4-b941-e34a2c8dd770] Running
	I0205 03:31:04.980840   77242 system_pods.go:61] "kube-controller-manager-bridge-253147" [7f885e6d-46f5-4db7-983f-0ff5cc6fe11e] Running
	I0205 03:31:04.980844   77242 system_pods.go:61] "kube-proxy-tznhk" [25ee03b7-9305-4158-acea-769f9f5c3e80] Running
	I0205 03:31:04.980847   77242 system_pods.go:61] "kube-scheduler-bridge-253147" [3e6c7848-d410-4132-b8fa-ec9298afbafb] Running
	I0205 03:31:04.980850   77242 system_pods.go:61] "storage-provisioner" [0cc7c11d-e735-4916-9fab-0f7be7596b7b] Running
	I0205 03:31:04.980855   77242 system_pods.go:74] duration metric: took 2.562597ms to wait for pod list to return data ...
	I0205 03:31:04.980862   77242 default_sa.go:34] waiting for default service account to be created ...
	I0205 03:31:04.982923   77242 default_sa.go:45] found service account: "default"
	I0205 03:31:04.982942   77242 default_sa.go:55] duration metric: took 2.0718ms for default service account to be created ...
	I0205 03:31:04.982952   77242 system_pods.go:116] waiting for k8s-apps to be running ...
	I0205 03:31:05.038759   77242 system_pods.go:86] 7 kube-system pods found
	I0205 03:31:05.038796   77242 system_pods.go:89] "coredns-668d6bf9bc-w4q4d" [a2c00545-1eec-40a6-b4c6-0496a18806e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0205 03:31:05.038806   77242 system_pods.go:89] "etcd-bridge-253147" [f40b76f0-91f6-42de-b983-960541a36f7f] Running
	I0205 03:31:05.038821   77242 system_pods.go:89] "kube-apiserver-bridge-253147" [3b9d175a-2a6b-40f4-b941-e34a2c8dd770] Running
	I0205 03:31:05.038830   77242 system_pods.go:89] "kube-controller-manager-bridge-253147" [7f885e6d-46f5-4db7-983f-0ff5cc6fe11e] Running
	I0205 03:31:05.038836   77242 system_pods.go:89] "kube-proxy-tznhk" [25ee03b7-9305-4158-acea-769f9f5c3e80] Running
	I0205 03:31:05.038842   77242 system_pods.go:89] "kube-scheduler-bridge-253147" [3e6c7848-d410-4132-b8fa-ec9298afbafb] Running
	I0205 03:31:05.038850   77242 system_pods.go:89] "storage-provisioner" [0cc7c11d-e735-4916-9fab-0f7be7596b7b] Running
	I0205 03:31:05.038858   77242 system_pods.go:126] duration metric: took 55.900127ms to wait for k8s-apps to be running ...
	I0205 03:31:05.038870   77242 system_svc.go:44] waiting for kubelet service to be running ....
	I0205 03:31:05.038916   77242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:31:05.058257   77242 system_svc.go:56] duration metric: took 19.375476ms WaitForService to wait for kubelet
	I0205 03:31:05.058293   77242 kubeadm.go:582] duration metric: took 7.932197543s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:31:05.058311   77242 node_conditions.go:102] verifying NodePressure condition ...
	I0205 03:31:05.239036   77242 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0205 03:31:05.239071   77242 node_conditions.go:123] node cpu capacity is 2
	I0205 03:31:05.239087   77242 node_conditions.go:105] duration metric: took 180.769579ms to run NodePressure ...
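The NodePressure check reads node capacity from the API; a hedged one-liner that should show the same figures (17734596Ki ephemeral storage, 2 CPUs):
	kubectl get node bridge-253147 -o jsonpath='{.status.capacity}{"\n"}'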
	I0205 03:31:05.239101   77242 start.go:241] waiting for startup goroutines ...
	I0205 03:31:05.239112   77242 start.go:246] waiting for cluster config update ...
	I0205 03:31:05.239124   77242 start.go:255] writing updated cluster config ...
	I0205 03:31:05.239493   77242 ssh_runner.go:195] Run: rm -f paused
	I0205 03:31:05.294017   77242 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0205 03:31:05.296746   77242 out.go:177] * Done! kubectl is now configured to use "bridge-253147" cluster and "default" namespace by default
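At this point the bridge-253147 profile is done; a minimal sanity check from the Jenkins host, assuming the kubeconfig at /home/jenkins/minikube-integration/20363-12788/kubeconfig is in use:
	kubectl config current-context   # expected: bridge-253147
	kubectl get nodes -o wide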
	I0205 03:31:03.434515   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Getting to WaitForSSH function...
	I0205 03:31:03.436973   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.437300   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:03.437328   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.437488   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Using SSH client type: external
	I0205 03:31:03.437517   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Using SSH private key: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa (-rw-------)
	I0205 03:31:03.437552   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0205 03:31:03.437566   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | About to run SSH command:
	I0205 03:31:03.437582   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | exit 0
	I0205 03:31:03.569720   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | SSH cmd err, output: <nil>: 
	I0205 03:31:03.570019   77491 main.go:141] libmachine: (enable-default-cni-253147) KVM machine creation complete
	I0205 03:31:03.570398   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetConfigRaw
	I0205 03:31:03.571050   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:03.571248   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:03.571394   77491 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0205 03:31:03.571410   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetState
	I0205 03:31:03.572652   77491 main.go:141] libmachine: Detecting operating system of created instance...
	I0205 03:31:03.572671   77491 main.go:141] libmachine: Waiting for SSH to be available...
	I0205 03:31:03.572678   77491 main.go:141] libmachine: Getting to WaitForSSH function...
	I0205 03:31:03.572687   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:03.574885   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.575234   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:03.575265   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.575429   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:03.575627   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.575781   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.575898   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:03.576046   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:03.576235   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:03.576246   77491 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0205 03:31:03.688735   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 03:31:03.688761   77491 main.go:141] libmachine: Detecting the provisioner...
	I0205 03:31:03.688784   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:03.691744   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.692124   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:03.692171   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.692296   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:03.692475   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.692610   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.692728   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:03.692870   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:03.693036   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:03.693047   77491 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0205 03:31:03.806008   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0205 03:31:03.806079   77491 main.go:141] libmachine: found compatible host: buildroot
	I0205 03:31:03.806085   77491 main.go:141] libmachine: Provisioning with buildroot...
	I0205 03:31:03.806092   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetMachineName
	I0205 03:31:03.806338   77491 buildroot.go:166] provisioning hostname "enable-default-cni-253147"
	I0205 03:31:03.806365   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetMachineName
	I0205 03:31:03.806532   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:03.809230   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.809596   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:03.809630   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.809771   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:03.809956   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.810078   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.810257   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:03.810421   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:03.810633   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:03.810647   77491 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-253147 && echo "enable-default-cni-253147" | sudo tee /etc/hostname
	I0205 03:31:03.941615   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-253147
	
	I0205 03:31:03.941643   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:03.944752   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.945135   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:03.945170   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:03.945409   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:03.945623   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.945809   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:03.945969   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:03.946143   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:03.946376   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:03.946413   77491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-253147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-253147/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-253147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 03:31:04.069933   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
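The SSH snippet above pins the 127.0.1.1 entry to the new hostname; a hedged check on the guest:
	hostname                       # expected: enable-default-cni-253147
	grep '^127.0.1.1' /etc/hosts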
	I0205 03:31:04.069966   77491 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12788/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12788/.minikube}
	I0205 03:31:04.070034   77491 buildroot.go:174] setting up certificates
	I0205 03:31:04.070049   77491 provision.go:84] configureAuth start
	I0205 03:31:04.070065   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetMachineName
	I0205 03:31:04.070383   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetIP
	I0205 03:31:04.073266   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.073601   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.073628   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.073751   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.076116   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.076479   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.076514   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.076673   77491 provision.go:143] copyHostCerts
	I0205 03:31:04.076745   77491 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem, removing ...
	I0205 03:31:04.076762   77491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem
	I0205 03:31:04.076827   77491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/ca.pem (1082 bytes)
	I0205 03:31:04.076947   77491 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem, removing ...
	I0205 03:31:04.076958   77491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem
	I0205 03:31:04.076995   77491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/cert.pem (1123 bytes)
	I0205 03:31:04.077070   77491 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem, removing ...
	I0205 03:31:04.077081   77491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem
	I0205 03:31:04.077109   77491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12788/.minikube/key.pem (1675 bytes)
	I0205 03:31:04.077217   77491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-253147 san=[127.0.0.1 192.168.72.143 enable-default-cni-253147 localhost minikube]
	I0205 03:31:04.251531   77491 provision.go:177] copyRemoteCerts
	I0205 03:31:04.251605   77491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 03:31:04.251640   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.254390   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.254683   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.254732   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.254918   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.255115   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.255296   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.255435   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:04.343524   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0205 03:31:04.371060   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0205 03:31:04.398285   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0205 03:31:04.424766   77491 provision.go:87] duration metric: took 354.702782ms to configureAuth
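	configureAuth above issues a server certificate whose SANs cover 127.0.0.1, the guest IP, the machine name, localhost and minikube. A minimal sketch of producing such a certificate with Go's standard crypto/x509, assuming a throwaway self-signed CA (names, validity and output handling are illustrative only, not minikube's implementation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA (illustrative only).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs listed in the log above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "enable-default-cni-253147"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"enable-default-cni-253147", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.143")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}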
	I0205 03:31:04.424795   77491 buildroot.go:189] setting minikube options for container-runtime
	I0205 03:31:04.424952   77491 config.go:182] Loaded profile config "enable-default-cni-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:31:04.425016   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.427625   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.427918   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.427941   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.428113   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.428367   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.428554   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.428699   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.428855   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:04.429035   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:04.429053   77491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 03:31:04.669877   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 03:31:04.669925   77491 main.go:141] libmachine: Checking connection to Docker...
	I0205 03:31:04.669936   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetURL
	I0205 03:31:04.671447   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | using libvirt version 6000000
	I0205 03:31:04.673878   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.674280   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.674314   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.674469   77491 main.go:141] libmachine: Docker is up and running!
	I0205 03:31:04.674490   77491 main.go:141] libmachine: Reticulating splines...
	I0205 03:31:04.674496   77491 client.go:171] duration metric: took 28.085948821s to LocalClient.Create
	I0205 03:31:04.674515   77491 start.go:167] duration metric: took 28.08601116s to libmachine.API.Create "enable-default-cni-253147"
	I0205 03:31:04.674525   77491 start.go:293] postStartSetup for "enable-default-cni-253147" (driver="kvm2")
	I0205 03:31:04.674534   77491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 03:31:04.674551   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:04.674777   77491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 03:31:04.674799   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.677166   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.677546   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.677583   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.677719   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.677934   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.678110   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.678319   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:04.765407   77491 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 03:31:04.769563   77491 info.go:137] Remote host: Buildroot 2023.02.9
	I0205 03:31:04.769590   77491 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/addons for local assets ...
	I0205 03:31:04.769676   77491 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12788/.minikube/files for local assets ...
	I0205 03:31:04.769804   77491 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem -> 199892.pem in /etc/ssl/certs
	I0205 03:31:04.769960   77491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 03:31:04.780900   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:31:04.803771   77491 start.go:296] duration metric: took 129.206676ms for postStartSetup
	I0205 03:31:04.803864   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetConfigRaw
	I0205 03:31:04.804597   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetIP
	I0205 03:31:04.807183   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.807475   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.807496   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.807782   77491 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/config.json ...
	I0205 03:31:04.808013   77491 start.go:128] duration metric: took 28.241451408s to createHost
	I0205 03:31:04.808036   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.810436   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.810787   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.810814   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.810919   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.811109   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.811238   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.811355   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.811504   77491 main.go:141] libmachine: Using SSH client type: native
	I0205 03:31:04.811715   77491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 192.168.72.143 22 <nil> <nil>}
	I0205 03:31:04.811732   77491 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0205 03:31:04.926002   77491 main.go:141] libmachine: SSH cmd err, output: <nil>: 1738726264.914833275
	
	I0205 03:31:04.926023   77491 fix.go:216] guest clock: 1738726264.914833275
	I0205 03:31:04.926030   77491 fix.go:229] Guest: 2025-02-05 03:31:04.914833275 +0000 UTC Remote: 2025-02-05 03:31:04.808026342 +0000 UTC m=+51.788410297 (delta=106.806933ms)
	I0205 03:31:04.926064   77491 fix.go:200] guest clock delta is within tolerance: 106.806933ms
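	The clock check above runs `date +%s.%N` on the guest and compares the result with the host's wall clock. A small sketch of that delta computation, assuming the raw "seconds.nanoseconds" string has already been read back over SSH (the tolerance value here is an assumption; the log only says the delta was "within tolerance"):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestTime parses the output of `date +%s.%N` into a time.Time.
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestTime("1738726264.914833275") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	// Hypothetical 2s tolerance, for illustration only.
	ok := math.Abs(delta.Seconds()) < 2
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, ok)
}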
	I0205 03:31:04.926069   77491 start.go:83] releasing machines lock for "enable-default-cni-253147", held for 28.359642702s
	I0205 03:31:04.926086   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:04.926463   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetIP
	I0205 03:31:04.929123   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.929524   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.929555   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.929752   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:04.930239   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:04.930427   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:04.930513   77491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 03:31:04.930571   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.930629   77491 ssh_runner.go:195] Run: cat /version.json
	I0205 03:31:04.930659   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:04.933284   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.933605   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.933684   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.933713   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.933856   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.933950   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:04.933981   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:04.934048   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.934118   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:04.934190   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.934261   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:04.934337   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:04.934363   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:04.934497   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:05.038295   77491 ssh_runner.go:195] Run: systemctl --version
	I0205 03:31:05.046555   77491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 03:31:05.202392   77491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0205 03:31:05.208827   77491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0205 03:31:05.208909   77491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 03:31:05.224566   77491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0205 03:31:05.224591   77491 start.go:495] detecting cgroup driver to use...
	I0205 03:31:05.224649   77491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 03:31:05.241921   77491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 03:31:05.258520   77491 docker.go:217] disabling cri-docker service (if available) ...
	I0205 03:31:05.258573   77491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 03:31:05.273000   77491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 03:31:05.290764   77491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 03:31:05.415930   77491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 03:31:05.580644   77491 docker.go:233] disabling docker service ...
	I0205 03:31:05.580722   77491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 03:31:05.597829   77491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 03:31:05.612417   77491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 03:31:05.738341   77491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 03:31:05.858172   77491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 03:31:05.872230   77491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 03:31:05.890687   77491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 03:31:05.890756   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.900897   77491 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 03:31:05.900964   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.911101   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.921279   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.931261   77491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 03:31:05.941601   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.951767   77491 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 03:31:05.968682   77491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
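	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A rough Go sketch of the pause-image and cgroup rewrites applied to the file contents in memory (the sysctl edits follow the same pattern); the regexes mirror the sed expressions, but this is an illustration, not how minikube edits the file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pause_image -> registry.k8s.io/pause:3.10
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// cgroup_manager -> cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then pin it to "pod" after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}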
	I0205 03:31:05.978994   77491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 03:31:05.988162   77491 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0205 03:31:05.988245   77491 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0205 03:31:06.000942   77491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 03:31:06.011697   77491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:31:06.154853   77491 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 03:31:06.247780   77491 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 03:31:06.247855   77491 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 03:31:06.252823   77491 start.go:563] Will wait 60s for crictl version
	I0205 03:31:06.252885   77491 ssh_runner.go:195] Run: which crictl
	I0205 03:31:06.256583   77491 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 03:31:06.303233   77491 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0205 03:31:06.303322   77491 ssh_runner.go:195] Run: crio --version
	I0205 03:31:06.333252   77491 ssh_runner.go:195] Run: crio --version
	I0205 03:31:06.367100   77491 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0205 03:31:06.368354   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetIP
	I0205 03:31:06.371200   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:06.371574   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:06.371610   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:06.371765   77491 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0205 03:31:06.375962   77491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:31:06.388425   77491 kubeadm.go:883] updating cluster {Name:enable-default-cni-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 03:31:06.388534   77491 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 03:31:06.388576   77491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:31:06.424506   77491 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0205 03:31:06.424581   77491 ssh_runner.go:195] Run: which lz4
	I0205 03:31:06.428194   77491 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0205 03:31:06.432079   77491 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0205 03:31:06.432103   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0205 03:31:07.771961   77491 crio.go:462] duration metric: took 1.34378928s to copy over tarball
	I0205 03:31:07.772026   77491 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0205 03:31:10.180443   77491 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.408392093s)
	I0205 03:31:10.180471   77491 crio.go:469] duration metric: took 2.408486001s to extract the tarball
	I0205 03:31:10.180478   77491 ssh_runner.go:146] rm: /preloaded.tar.lz4
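	The preload step above copies a roughly 400 MB tarball to the guest and unpacks it with tar and lz4, then deletes it. A minimal sketch of invoking the same extraction command locally with os/exec (same flags and paths as shown in the log; assumes sudo, tar and lz4 are available):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("preloaded images extracted")
}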
	I0205 03:31:10.217082   77491 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 03:31:10.257757   77491 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 03:31:10.257783   77491 cache_images.go:84] Images are preloaded, skipping loading
	I0205 03:31:10.257791   77491 kubeadm.go:934] updating node { 192.168.72.143 8443 v1.32.1 crio true true} ...
	I0205 03:31:10.257900   77491 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-253147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0205 03:31:10.257986   77491 ssh_runner.go:195] Run: crio config
	I0205 03:31:10.302669   77491 cni.go:84] Creating CNI manager for "bridge"
	I0205 03:31:10.302695   77491 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 03:31:10.302715   77491 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.143 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-253147 NodeName:enable-default-cni-253147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 03:31:10.302854   77491 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-253147"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.143"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.143"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 03:31:10.302912   77491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 03:31:10.313180   77491 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 03:31:10.313239   77491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 03:31:10.322875   77491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0205 03:31:10.341622   77491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 03:31:10.359218   77491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
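	The kubeadm, kubelet and kube-proxy configuration shown above is rendered from the cluster parameters and written to /var/tmp/minikube/kubeadm.yaml.new. A tiny sketch of rendering just the KubeletConfiguration fragment with Go's text/template (the struct and template here are hypothetical; minikube's real templating covers far more fields):

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds a few of the parameters visible in the config above.
type kubeletOpts struct {
	CgroupDriver  string
	CRISocket     string
	ClusterDomain string
}

const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
clusterDomain: "{{.ClusterDomain}}"
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	err := t.Execute(os.Stdout, kubeletOpts{
		CgroupDriver:  "cgroupfs",
		CRISocket:     "unix:///var/run/crio/crio.sock",
		ClusterDomain: "cluster.local",
	})
	if err != nil {
		panic(err)
	}
}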
	I0205 03:31:10.375474   77491 ssh_runner.go:195] Run: grep 192.168.72.143	control-plane.minikube.internal$ /etc/hosts
	I0205 03:31:10.379017   77491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 03:31:10.390336   77491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:31:10.515776   77491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:31:10.531260   77491 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147 for IP: 192.168.72.143
	I0205 03:31:10.531279   77491 certs.go:194] generating shared ca certs ...
	I0205 03:31:10.531295   77491 certs.go:226] acquiring lock for ca certs: {Name:mkca3cbb562bd1b517883c3b07fce8acc6cc9038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.531463   77491 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key
	I0205 03:31:10.531521   77491 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key
	I0205 03:31:10.531535   77491 certs.go:256] generating profile certs ...
	I0205 03:31:10.531597   77491 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.key
	I0205 03:31:10.531615   77491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt with IP's: []
	I0205 03:31:10.623511   77491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt ...
	I0205 03:31:10.623541   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.crt: {Name:mkc3265782d36a38d39b00b5a3fdc16129a0a7f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.623733   77491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.key ...
	I0205 03:31:10.623772   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/client.key: {Name:mk1e41e7e69153fbaadbab1473ba194abf87affa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.623886   77491 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key.6b6e9977
	I0205 03:31:10.623903   77491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt.6b6e9977 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.143]
	I0205 03:31:10.685642   77491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt.6b6e9977 ...
	I0205 03:31:10.685673   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt.6b6e9977: {Name:mk431170a7432cb10eb1d7e8d1913a32a3b3e772 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.685835   77491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key.6b6e9977 ...
	I0205 03:31:10.685850   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key.6b6e9977: {Name:mk07b515b8a77874236f65f75d7ed92f1da27679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.685926   77491 certs.go:381] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt.6b6e9977 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt
	I0205 03:31:10.685996   77491 certs.go:385] copying /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key.6b6e9977 -> /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key
	I0205 03:31:10.686048   77491 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.key
	I0205 03:31:10.686064   77491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.crt with IP's: []
	I0205 03:31:10.757612   77491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.crt ...
	I0205 03:31:10.757642   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.crt: {Name:mk6314fe40653bc406d9bc8936c93e134713ddb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.757802   77491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.key ...
	I0205 03:31:10.757814   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.key: {Name:mk87a05e19c8c78ef3191ba32fe40e1269b304c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:10.758016   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem (1338 bytes)
	W0205 03:31:10.758054   77491 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989_empty.pem, impossibly tiny 0 bytes
	I0205 03:31:10.758061   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca-key.pem (1679 bytes)
	I0205 03:31:10.758084   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/ca.pem (1082 bytes)
	I0205 03:31:10.758107   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/cert.pem (1123 bytes)
	I0205 03:31:10.758128   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/certs/key.pem (1675 bytes)
	I0205 03:31:10.758199   77491 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem (1708 bytes)
	I0205 03:31:10.758715   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 03:31:10.785626   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 03:31:10.814361   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 03:31:10.839358   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0205 03:31:10.863255   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0205 03:31:10.886722   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0205 03:31:10.910224   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 03:31:10.934083   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/enable-default-cni-253147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 03:31:10.957377   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 03:31:10.979896   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/certs/19989.pem --> /usr/share/ca-certificates/19989.pem (1338 bytes)
	I0205 03:31:11.001812   77491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/ssl/certs/199892.pem --> /usr/share/ca-certificates/199892.pem (1708 bytes)
	I0205 03:31:11.024528   77491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 03:31:11.039807   77491 ssh_runner.go:195] Run: openssl version
	I0205 03:31:11.045445   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199892.pem && ln -fs /usr/share/ca-certificates/199892.pem /etc/ssl/certs/199892.pem"
	I0205 03:31:11.056090   77491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199892.pem
	I0205 03:31:11.060457   77491 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:11 /usr/share/ca-certificates/199892.pem
	I0205 03:31:11.060511   77491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199892.pem
	I0205 03:31:11.066216   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199892.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 03:31:11.076504   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 03:31:11.086582   77491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:31:11.090749   77491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:31:11.090797   77491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 03:31:11.096301   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 03:31:11.106709   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19989.pem && ln -fs /usr/share/ca-certificates/19989.pem /etc/ssl/certs/19989.pem"
	I0205 03:31:11.116904   77491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19989.pem
	I0205 03:31:11.121120   77491 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:11 /usr/share/ca-certificates/19989.pem
	I0205 03:31:11.121168   77491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19989.pem
	I0205 03:31:11.126651   77491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19989.pem /etc/ssl/certs/51391683.0"
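	The block above installs each PEM into /usr/share/ca-certificates and symlinks it under /etc/ssl/certs by its OpenSSL subject hash (e.g. 3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL locates CA certificates by directory lookup. A sketch of the same idea, shelling out to `openssl x509 -hash` and creating the link (assumes openssl is on PATH and the process may write to the target directory; not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash symlinks certPath into dir under "<subject-hash>.0",
// the naming scheme OpenSSL uses for CA directory lookups.
func linkByHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}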
	I0205 03:31:11.137004   77491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 03:31:11.140782   77491 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0205 03:31:11.140834   77491 kubeadm.go:392] StartCluster: {Name:enable-default-cni-253147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-253147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 03:31:11.140918   77491 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 03:31:11.140985   77491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 03:31:11.182034   77491 cri.go:89] found id: ""
	I0205 03:31:11.182097   77491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 03:31:11.196119   77491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 03:31:11.208693   77491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 03:31:11.221618   77491 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 03:31:11.221642   77491 kubeadm.go:157] found existing configuration files:
	
	I0205 03:31:11.221695   77491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 03:31:11.232681   77491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 03:31:11.232757   77491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 03:31:11.244988   77491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 03:31:11.255084   77491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 03:31:11.255154   77491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 03:31:11.265175   77491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 03:31:11.274139   77491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 03:31:11.274209   77491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 03:31:11.284042   77491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 03:31:11.293097   77491 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 03:31:11.293171   77491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 03:31:11.302601   77491 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0205 03:31:11.467478   77491 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 03:31:21.702323   77491 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0205 03:31:21.702417   77491 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 03:31:21.702511   77491 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 03:31:21.702610   77491 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 03:31:21.702720   77491 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0205 03:31:21.702787   77491 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 03:31:21.704300   77491 out.go:235]   - Generating certificates and keys ...
	I0205 03:31:21.704401   77491 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 03:31:21.704496   77491 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 03:31:21.704598   77491 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 03:31:21.704688   77491 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 03:31:21.704758   77491 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 03:31:21.704823   77491 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 03:31:21.704911   77491 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 03:31:21.705067   77491 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-253147 localhost] and IPs [192.168.72.143 127.0.0.1 ::1]
	I0205 03:31:21.705151   77491 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 03:31:21.705297   77491 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-253147 localhost] and IPs [192.168.72.143 127.0.0.1 ::1]
	I0205 03:31:21.705413   77491 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 03:31:21.705492   77491 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 03:31:21.705572   77491 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 03:31:21.705633   77491 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 03:31:21.705706   77491 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 03:31:21.705817   77491 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0205 03:31:21.705868   77491 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 03:31:21.705925   77491 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 03:31:21.705980   77491 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 03:31:21.706055   77491 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 03:31:21.706109   77491 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 03:31:21.707276   77491 out.go:235]   - Booting up control plane ...
	I0205 03:31:21.707358   77491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 03:31:21.707425   77491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 03:31:21.707498   77491 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 03:31:21.707603   77491 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 03:31:21.707692   77491 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 03:31:21.707727   77491 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 03:31:21.707833   77491 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0205 03:31:21.707923   77491 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0205 03:31:21.707972   77491 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000944148s
	I0205 03:31:21.708055   77491 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0205 03:31:21.708123   77491 kubeadm.go:310] [api-check] The API server is healthy after 4.501976244s
	I0205 03:31:21.708226   77491 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0205 03:31:21.708331   77491 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0205 03:31:21.708379   77491 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0205 03:31:21.708542   77491 kubeadm.go:310] [mark-control-plane] Marking the node enable-default-cni-253147 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0205 03:31:21.708590   77491 kubeadm.go:310] [bootstrap-token] Using token: i9tybv.ko2zd8utm1qdci6y
	I0205 03:31:21.710478   77491 out.go:235]   - Configuring RBAC rules ...
	I0205 03:31:21.710598   77491 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0205 03:31:21.710675   77491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0205 03:31:21.710812   77491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0205 03:31:21.710919   77491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0205 03:31:21.711048   77491 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0205 03:31:21.711122   77491 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0205 03:31:21.711219   77491 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0205 03:31:21.711255   77491 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0205 03:31:21.711295   77491 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0205 03:31:21.711302   77491 kubeadm.go:310] 
	I0205 03:31:21.711377   77491 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0205 03:31:21.711409   77491 kubeadm.go:310] 
	I0205 03:31:21.711493   77491 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0205 03:31:21.711500   77491 kubeadm.go:310] 
	I0205 03:31:21.711521   77491 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0205 03:31:21.711571   77491 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0205 03:31:21.711620   77491 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0205 03:31:21.711625   77491 kubeadm.go:310] 
	I0205 03:31:21.711669   77491 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0205 03:31:21.711675   77491 kubeadm.go:310] 
	I0205 03:31:21.711719   77491 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0205 03:31:21.711726   77491 kubeadm.go:310] 
	I0205 03:31:21.711768   77491 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0205 03:31:21.711835   77491 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0205 03:31:21.711892   77491 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0205 03:31:21.711898   77491 kubeadm.go:310] 
	I0205 03:31:21.711997   77491 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0205 03:31:21.712098   77491 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0205 03:31:21.712112   77491 kubeadm.go:310] 
	I0205 03:31:21.712230   77491 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i9tybv.ko2zd8utm1qdci6y \
	I0205 03:31:21.712351   77491 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 \
	I0205 03:31:21.712382   77491 kubeadm.go:310] 	--control-plane 
	I0205 03:31:21.712388   77491 kubeadm.go:310] 
	I0205 03:31:21.712509   77491 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0205 03:31:21.712520   77491 kubeadm.go:310] 
	I0205 03:31:21.712613   77491 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i9tybv.ko2zd8utm1qdci6y \
	I0205 03:31:21.712734   77491 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d4171302552bdc887375c8f8e3004ed013421bed3bb373f5cbe18dbecc68169 
	I0205 03:31:21.712747   77491 cni.go:84] Creating CNI manager for "bridge"
	I0205 03:31:21.714829   77491 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0205 03:31:21.715847   77491 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0205 03:31:21.728548   77491 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0205 03:31:21.747916   77491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 03:31:21.747973   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:21.747982   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-253147 minikube.k8s.io/updated_at=2025_02_05T03_31_21_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d minikube.k8s.io/name=enable-default-cni-253147 minikube.k8s.io/primary=true
	I0205 03:31:21.898784   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:21.898791   77491 ops.go:34] apiserver oom_adj: -16
	I0205 03:31:22.398898   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:22.899591   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:23.399781   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:23.899353   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:24.399184   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:24.899785   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:25.399192   77491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 03:31:25.489351   77491 kubeadm.go:1113] duration metric: took 3.741429787s to wait for elevateKubeSystemPrivileges
	I0205 03:31:25.489405   77491 kubeadm.go:394] duration metric: took 14.348568211s to StartCluster
	I0205 03:31:25.489435   77491 settings.go:142] acquiring lock: {Name:mk2eca847da5ba78f5b041a83e5cfcbdebb0c621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:25.489532   77491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:31:25.490672   77491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/kubeconfig: {Name:mkb405c4292c681fd728af1af9684132d7e45754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 03:31:25.490945   77491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0205 03:31:25.490965   77491 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0205 03:31:25.490942   77491 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.143 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 03:31:25.491054   77491 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-253147"
	I0205 03:31:25.491070   77491 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-253147"
	I0205 03:31:25.491098   77491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-253147"
	I0205 03:31:25.491172   77491 config.go:182] Loaded profile config "enable-default-cni-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:31:25.491076   77491 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-253147"
	I0205 03:31:25.491240   77491 host.go:66] Checking if "enable-default-cni-253147" exists ...
	I0205 03:31:25.491554   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:31:25.491580   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:31:25.491631   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:31:25.491695   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:31:25.493470   77491 out.go:177] * Verifying Kubernetes components...
	I0205 03:31:25.494653   77491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 03:31:25.507196   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37971
	I0205 03:31:25.507733   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:31:25.508225   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:31:25.508251   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:31:25.508633   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:31:25.508848   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetState
	I0205 03:31:25.511915   77491 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-253147"
	I0205 03:31:25.511958   77491 host.go:66] Checking if "enable-default-cni-253147" exists ...
	I0205 03:31:25.512271   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:31:25.512312   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:31:25.512520   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44659
	I0205 03:31:25.513046   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:31:25.513629   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:31:25.513651   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:31:25.514023   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:31:25.514537   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:31:25.514584   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:31:25.528623   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0205 03:31:25.529227   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:31:25.529855   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:31:25.529881   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:31:25.530277   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:31:25.530867   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0205 03:31:25.530979   77491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 03:31:25.531034   77491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 03:31:25.531264   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:31:25.531698   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:31:25.531717   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:31:25.532127   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:31:25.532337   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetState
	I0205 03:31:25.534421   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:25.536616   77491 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 03:31:25.537918   77491 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:31:25.537937   77491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0205 03:31:25.537954   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:25.541749   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:25.542282   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:25.542311   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:25.542509   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:25.542722   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:25.542909   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:25.543162   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:25.548839   77491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46559
	I0205 03:31:25.549236   77491 main.go:141] libmachine: () Calling .GetVersion
	I0205 03:31:25.549759   77491 main.go:141] libmachine: Using API Version  1
	I0205 03:31:25.549789   77491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 03:31:25.550077   77491 main.go:141] libmachine: () Calling .GetMachineName
	I0205 03:31:25.550325   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetState
	I0205 03:31:25.551945   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .DriverName
	I0205 03:31:25.552161   77491 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0205 03:31:25.552177   77491 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0205 03:31:25.552196   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHHostname
	I0205 03:31:25.555135   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:25.555623   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:b1:0a", ip: ""} in network mk-enable-default-cni-253147: {Iface:virbr4 ExpiryTime:2025-02-05 04:30:52 +0000 UTC Type:0 Mac:52:54:00:f2:b1:0a Iaid: IPaddr:192.168.72.143 Prefix:24 Hostname:enable-default-cni-253147 Clientid:01:52:54:00:f2:b1:0a}
	I0205 03:31:25.555654   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | domain enable-default-cni-253147 has defined IP address 192.168.72.143 and MAC address 52:54:00:f2:b1:0a in network mk-enable-default-cni-253147
	I0205 03:31:25.555813   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHPort
	I0205 03:31:25.556023   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHKeyPath
	I0205 03:31:25.556200   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .GetSSHUsername
	I0205 03:31:25.556355   77491 sshutil.go:53] new ssh client: &{IP:192.168.72.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/enable-default-cni-253147/id_rsa Username:docker}
	I0205 03:31:25.658549   77491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 03:31:25.658648   77491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0205 03:31:25.794973   77491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 03:31:25.815347   77491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0205 03:31:26.247996   77491 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0205 03:31:26.249058   77491 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-253147" to be "Ready" ...
	I0205 03:31:26.259706   77491 node_ready.go:49] node "enable-default-cni-253147" has status "Ready":"True"
	I0205 03:31:26.259731   77491 node_ready.go:38] duration metric: took 10.646859ms for node "enable-default-cni-253147" to be "Ready" ...
	I0205 03:31:26.259743   77491 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:31:26.263978   77491 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace to be "Ready" ...
	I0205 03:31:26.568098   77491 main.go:141] libmachine: Making call to close driver server
	I0205 03:31:26.568137   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Close
	I0205 03:31:26.568142   77491 main.go:141] libmachine: Making call to close driver server
	I0205 03:31:26.568165   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Close
	I0205 03:31:26.568478   77491 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:31:26.568508   77491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:31:26.568517   77491 main.go:141] libmachine: Making call to close driver server
	I0205 03:31:26.568508   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Closing plugin on server side
	I0205 03:31:26.568481   77491 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:31:26.568578   77491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:31:26.568588   77491 main.go:141] libmachine: Making call to close driver server
	I0205 03:31:26.568601   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Close
	I0205 03:31:26.568530   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Close
	I0205 03:31:26.568917   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Closing plugin on server side
	I0205 03:31:26.568925   77491 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:31:26.568984   77491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:31:26.568985   77491 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:31:26.569194   77491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:31:26.568944   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Closing plugin on server side
	I0205 03:31:26.585395   77491 main.go:141] libmachine: Making call to close driver server
	I0205 03:31:26.585423   77491 main.go:141] libmachine: (enable-default-cni-253147) Calling .Close
	I0205 03:31:26.585742   77491 main.go:141] libmachine: Successfully made call to close driver server
	I0205 03:31:26.585761   77491 main.go:141] libmachine: Making call to close connection to plugin binary
	I0205 03:31:26.585769   77491 main.go:141] libmachine: (enable-default-cni-253147) DBG | Closing plugin on server side
	I0205 03:31:26.587945   77491 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0205 03:31:26.589121   77491 addons.go:514] duration metric: took 1.098153606s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0205 03:31:26.753030   77491 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-253147" context rescaled to 1 replicas
	I0205 03:31:28.269500   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:30.275505   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:32.770529   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:35.270080   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:37.769811   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:40.270331   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:42.769745   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:44.770281   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:47.268926   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:49.270586   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:51.770521   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:54.269825   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:56.769127   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:31:58.771048   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:32:01.270244   77491 pod_ready.go:103] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"False"
	I0205 03:32:03.769464   77491 pod_ready.go:93] pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:03.769488   77491 pod_ready.go:82] duration metric: took 37.505471392s for pod "coredns-668d6bf9bc-8nj85" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.769500   77491 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-j5vpn" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.771193   77491 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-j5vpn" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-j5vpn" not found
	I0205 03:32:03.771213   77491 pod_ready.go:82] duration metric: took 1.707852ms for pod "coredns-668d6bf9bc-j5vpn" in "kube-system" namespace to be "Ready" ...
	E0205 03:32:03.771222   77491 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-j5vpn" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-j5vpn" not found
	I0205 03:32:03.771230   77491 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.775304   77491 pod_ready.go:93] pod "etcd-enable-default-cni-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:03.775324   77491 pod_ready.go:82] duration metric: took 4.087134ms for pod "etcd-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.775335   77491 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.778837   77491 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:03.778853   77491 pod_ready.go:82] duration metric: took 3.511148ms for pod "kube-apiserver-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.778864   77491 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.782399   77491 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:03.782416   77491 pod_ready.go:82] duration metric: took 3.544891ms for pod "kube-controller-manager-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.782425   77491 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-56g74" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.967978   77491 pod_ready.go:93] pod "kube-proxy-56g74" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:03.968003   77491 pod_ready.go:82] duration metric: took 185.571014ms for pod "kube-proxy-56g74" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:03.968013   77491 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:04.368643   77491 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-253147" in "kube-system" namespace has status "Ready":"True"
	I0205 03:32:04.368669   77491 pod_ready.go:82] duration metric: took 400.649646ms for pod "kube-scheduler-enable-default-cni-253147" in "kube-system" namespace to be "Ready" ...
	I0205 03:32:04.368679   77491 pod_ready.go:39] duration metric: took 38.10892276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 03:32:04.368698   77491 api_server.go:52] waiting for apiserver process to appear ...
	I0205 03:32:04.368762   77491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 03:32:04.384644   77491 api_server.go:72] duration metric: took 38.893584005s to wait for apiserver process to appear ...
	I0205 03:32:04.384671   77491 api_server.go:88] waiting for apiserver healthz status ...
	I0205 03:32:04.384688   77491 api_server.go:253] Checking apiserver healthz at https://192.168.72.143:8443/healthz ...
	I0205 03:32:04.389020   77491 api_server.go:279] https://192.168.72.143:8443/healthz returned 200:
	ok
	I0205 03:32:04.389953   77491 api_server.go:141] control plane version: v1.32.1
	I0205 03:32:04.389976   77491 api_server.go:131] duration metric: took 5.299568ms to wait for apiserver health ...
	I0205 03:32:04.389984   77491 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 03:32:04.569218   77491 system_pods.go:59] 7 kube-system pods found
	I0205 03:32:04.569254   77491 system_pods.go:61] "coredns-668d6bf9bc-8nj85" [35e45a5a-0f36-4b67-9c91-ba4b0436f156] Running
	I0205 03:32:04.569262   77491 system_pods.go:61] "etcd-enable-default-cni-253147" [b2e06ccf-3909-417f-9ed0-ce47bd790bde] Running
	I0205 03:32:04.569267   77491 system_pods.go:61] "kube-apiserver-enable-default-cni-253147" [0463ffc5-a722-4df5-9885-1db3f7c8e89f] Running
	I0205 03:32:04.569274   77491 system_pods.go:61] "kube-controller-manager-enable-default-cni-253147" [38a85a6f-c6a1-473b-bd05-2d64be2f8c52] Running
	I0205 03:32:04.569279   77491 system_pods.go:61] "kube-proxy-56g74" [fa42b842-56ce-4965-9822-f28a774ab641] Running
	I0205 03:32:04.569284   77491 system_pods.go:61] "kube-scheduler-enable-default-cni-253147" [2fccc41d-e339-4ea1-a296-be7befb819fb] Running
	I0205 03:32:04.569289   77491 system_pods.go:61] "storage-provisioner" [45a5c96f-ab44-4fc3-81c0-b4f8208b1973] Running
	I0205 03:32:04.569296   77491 system_pods.go:74] duration metric: took 179.306382ms to wait for pod list to return data ...
	I0205 03:32:04.569304   77491 default_sa.go:34] waiting for default service account to be created ...
	I0205 03:32:04.768437   77491 default_sa.go:45] found service account: "default"
	I0205 03:32:04.768463   77491 default_sa.go:55] duration metric: took 199.153251ms for default service account to be created ...
	I0205 03:32:04.768472   77491 system_pods.go:116] waiting for k8s-apps to be running ...
	I0205 03:32:04.968947   77491 system_pods.go:86] 7 kube-system pods found
	I0205 03:32:04.968976   77491 system_pods.go:89] "coredns-668d6bf9bc-8nj85" [35e45a5a-0f36-4b67-9c91-ba4b0436f156] Running
	I0205 03:32:04.968982   77491 system_pods.go:89] "etcd-enable-default-cni-253147" [b2e06ccf-3909-417f-9ed0-ce47bd790bde] Running
	I0205 03:32:04.968986   77491 system_pods.go:89] "kube-apiserver-enable-default-cni-253147" [0463ffc5-a722-4df5-9885-1db3f7c8e89f] Running
	I0205 03:32:04.968990   77491 system_pods.go:89] "kube-controller-manager-enable-default-cni-253147" [38a85a6f-c6a1-473b-bd05-2d64be2f8c52] Running
	I0205 03:32:04.968994   77491 system_pods.go:89] "kube-proxy-56g74" [fa42b842-56ce-4965-9822-f28a774ab641] Running
	I0205 03:32:04.968997   77491 system_pods.go:89] "kube-scheduler-enable-default-cni-253147" [2fccc41d-e339-4ea1-a296-be7befb819fb] Running
	I0205 03:32:04.969001   77491 system_pods.go:89] "storage-provisioner" [45a5c96f-ab44-4fc3-81c0-b4f8208b1973] Running
	I0205 03:32:04.969010   77491 system_pods.go:126] duration metric: took 200.530558ms to wait for k8s-apps to be running ...
	I0205 03:32:04.969017   77491 system_svc.go:44] waiting for kubelet service to be running ....
	I0205 03:32:04.969060   77491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 03:32:04.983667   77491 system_svc.go:56] duration metric: took 14.639753ms WaitForService to wait for kubelet
	I0205 03:32:04.983693   77491 kubeadm.go:582] duration metric: took 39.492639559s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 03:32:04.983710   77491 node_conditions.go:102] verifying NodePressure condition ...
	I0205 03:32:05.168037   77491 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0205 03:32:05.168063   77491 node_conditions.go:123] node cpu capacity is 2
	I0205 03:32:05.168077   77491 node_conditions.go:105] duration metric: took 184.363284ms to run NodePressure ...
	I0205 03:32:05.168088   77491 start.go:241] waiting for startup goroutines ...
	I0205 03:32:05.168094   77491 start.go:246] waiting for cluster config update ...
	I0205 03:32:05.168103   77491 start.go:255] writing updated cluster config ...
	I0205 03:32:05.168381   77491 ssh_runner.go:195] Run: rm -f paused
	I0205 03:32:05.216322   77491 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0205 03:32:05.218842   77491 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-253147" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.420891599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726901420858999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba95f9c4-f24d-43a0-b1ba-faa63d0b17a8 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.421438910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97c6e623-73bc-4d3b-a910-bb994c959c19 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.421505720Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97c6e623-73bc-4d3b-a910-bb994c959c19 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.421542547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=97c6e623-73bc-4d3b-a910-bb994c959c19 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.451869696Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3797dfbb-ade4-4154-a0de-cd777c3614e4 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.451955907Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3797dfbb-ade4-4154-a0de-cd777c3614e4 name=/runtime.v1.RuntimeService/Version
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.452900437Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3ac5096-7e9d-4e6e-8fb5-8329557f7405 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.453288526Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726901453260565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3ac5096-7e9d-4e6e-8fb5-8329557f7405 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.453801752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20f4c342-26c4-46a3-8081-e7350e39a9c5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.453861591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20f4c342-26c4-46a3-8081-e7350e39a9c5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.453899326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=20f4c342-26c4-46a3-8081-e7350e39a9c5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.483804478Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7cd2561e-826c-47c2-914e-0c6b5576beaf name=/runtime.v1.RuntimeService/Version
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.483896406Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7cd2561e-826c-47c2-914e-0c6b5576beaf name=/runtime.v1.RuntimeService/Version
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.484958896Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=672cec13-2f9b-4ffe-9841-a22eb5f57595 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.485334599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726901485312467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=672cec13-2f9b-4ffe-9841-a22eb5f57595 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.485887939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06c17eff-1861-4dbe-8956-978e000c2e35 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.485935196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06c17eff-1861-4dbe-8956-978e000c2e35 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.485972352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=06c17eff-1861-4dbe-8956-978e000c2e35 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.516052069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1def9ab8-47e3-403e-b63e-b94d3974aa6e name=/runtime.v1.RuntimeService/Version
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.516133527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1def9ab8-47e3-403e-b63e-b94d3974aa6e name=/runtime.v1.RuntimeService/Version
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.517549241Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee3f5ca6-1dd2-43dd-bfe2-c509db6a7e77 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.518013876Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738726901517985341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee3f5ca6-1dd2-43dd-bfe2-c509db6a7e77 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.518816313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=862415de-caf1-4994-94c6-730e8623ee85 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.518882476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=862415de-caf1-4994-94c6-730e8623ee85 name=/runtime.v1.RuntimeService/ListContainers
	Feb 05 03:41:41 old-k8s-version-191773 crio[628]: time="2025-02-05 03:41:41.518915232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=862415de-caf1-4994-94c6-730e8623ee85 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb 5 03:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053905] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.006200] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.096613] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.500084] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.640447] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.062173] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064592] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.179648] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.107931] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.224718] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +6.148854] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.061424] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.976913] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[ +13.604699] kauditd_printk_skb: 46 callbacks suppressed
	[Feb 5 03:22] systemd-fstab-generator[5042]: Ignoring "noauto" option for root device
	[Feb 5 03:24] systemd-fstab-generator[5320]: Ignoring "noauto" option for root device
	[  +0.067795] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 03:41:41 up 23 min,  0 users,  load average: 0.04, 0.03, 0.02
	Linux old-k8s-version-191773 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]: net.(*Dialer).DialContext(0xc00028a3c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c8a060, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0008edbe0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c8a060, 0x24, 0x1000000000060, 0x7f5974266f48, 0x118, ...)
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]: net/http.(*Transport).dial(0xc0008b8000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c8a060, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]: net/http.(*Transport).dialConn(0xc0008b8000, 0x4f7fe00, 0xc000120018, 0x0, 0xc000a3e9c0, 0x5, 0xc000c8a060, 0x24, 0x0, 0xc000c8e000, ...)
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]: net/http.(*Transport).dialConnFor(0xc0008b8000, 0xc000974580)
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]: created by net/http.(*Transport).queueForDial
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]: goroutine 174 [select]:
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000a800c0, 0xc000932800, 0xc000a3ec60, 0xc000a3ec00)
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]: created by net.(*netFD).connect
	Feb 05 03:41:39 old-k8s-version-191773 kubelet[7162]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Feb 05 03:41:40 old-k8s-version-191773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 178.
	Feb 05 03:41:40 old-k8s-version-191773 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 05 03:41:40 old-k8s-version-191773 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 05 03:41:40 old-k8s-version-191773 kubelet[7172]: I0205 03:41:40.508961    7172 server.go:416] Version: v1.20.0
	Feb 05 03:41:40 old-k8s-version-191773 kubelet[7172]: I0205 03:41:40.509187    7172 server.go:837] Client rotation is on, will bootstrap in background
	Feb 05 03:41:40 old-k8s-version-191773 kubelet[7172]: I0205 03:41:40.510952    7172 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 05 03:41:40 old-k8s-version-191773 kubelet[7172]: W0205 03:41:40.511692    7172 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 05 03:41:40 old-k8s-version-191773 kubelet[7172]: I0205 03:41:40.511915    7172 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191773 -n old-k8s-version-191773
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 2 (221.77772ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-191773" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (367.99s)
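The post-mortem above shows why this case fails: the kubelet on old-k8s-version-191773 is in a systemd restart loop (restart counter at 178) and the API server is stopped, so the describe-nodes step is refused on localhost:8443 and CRI-O returns an empty container list. As a minimal triage sketch (an illustration only, not part of the test harness, and assuming the old-k8s-version-191773 profile is still up on the test host), the node could be inspected directly over SSH:

	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "sudo systemctl status kubelet --no-pager"       # confirm the restart loop and last exit status
	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "sudo journalctl -u kubelet -n 200 --no-pager"   # larger slice of the kubelet journal than the tail captured above
	out/minikube-linux-amd64 -p old-k8s-version-191773 ssh "sudo crictl ps -a"                              # container view; an empty list matches this report

These are generic systemd/CRI inspection commands; pinpointing the actual crash cause would need the full kubelet journal rather than the short excerpt included in the log dump.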

                                                
                                    

Test pass (270/321)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.63
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 5.46
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.6
22 TestOffline 80.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 134.37
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.51
35 TestAddons/parallel/Registry 16.41
37 TestAddons/parallel/InspektorGadget 10.71
38 TestAddons/parallel/MetricsServer 5.96
40 TestAddons/parallel/CSI 66.4
41 TestAddons/parallel/Headlamp 19.86
42 TestAddons/parallel/CloudSpanner 5.61
43 TestAddons/parallel/LocalPath 18.08
44 TestAddons/parallel/NvidiaDevicePlugin 5.7
45 TestAddons/parallel/Yakd 11.08
47 TestAddons/StoppedEnableDisable 91.22
48 TestCertOptions 74.26
49 TestCertExpiration 291.23
51 TestForceSystemdFlag 64.01
52 TestForceSystemdEnv 97.22
54 TestKVMDriverInstallOrUpdate 4.08
58 TestErrorSpam/setup 43.42
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.72
61 TestErrorSpam/pause 1.53
62 TestErrorSpam/unpause 1.68
63 TestErrorSpam/stop 5.35
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 53.32
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 41.61
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.38
75 TestFunctional/serial/CacheCmd/cache/add_local 1.94
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 31.44
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.35
86 TestFunctional/serial/LogsFileCmd 1.33
87 TestFunctional/serial/InvalidService 3.98
89 TestFunctional/parallel/ConfigCmd 0.34
90 TestFunctional/parallel/DashboardCmd 13.5
91 TestFunctional/parallel/DryRun 0.33
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.03
97 TestFunctional/parallel/ServiceCmdConnect 10.56
98 TestFunctional/parallel/AddonsCmd 0.14
101 TestFunctional/parallel/SSHCmd 0.42
102 TestFunctional/parallel/CpCmd 1.33
103 TestFunctional/parallel/MySQL 31.45
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.32
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
113 TestFunctional/parallel/License 0.24
123 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
125 TestFunctional/parallel/ProfileCmd/profile_list 0.45
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
127 TestFunctional/parallel/MountCmd/any-port 9.09
128 TestFunctional/parallel/ServiceCmd/List 0.34
129 TestFunctional/parallel/MountCmd/specific-port 2.13
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
132 TestFunctional/parallel/ServiceCmd/Format 0.4
133 TestFunctional/parallel/ServiceCmd/URL 0.33
134 TestFunctional/parallel/MountCmd/VerifyCleanup 0.88
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 0.46
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.58
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
141 TestFunctional/parallel/ImageCommands/ImageBuild 7.23
142 TestFunctional/parallel/ImageCommands/Setup 1.81
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.19
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.57
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.75
150 TestFunctional/parallel/ImageCommands/ImageRemove 1.28
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.24
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 196.68
161 TestMultiControlPlane/serial/DeployApp 5.99
162 TestMultiControlPlane/serial/PingHostFromPods 1.14
163 TestMultiControlPlane/serial/AddWorkerNode 52.6
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
166 TestMultiControlPlane/serial/CopyFile 12.65
167 TestMultiControlPlane/serial/StopSecondaryNode 91.59
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
169 TestMultiControlPlane/serial/RestartSecondaryNode 53.52
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 427.42
172 TestMultiControlPlane/serial/DeleteSecondaryNode 17.89
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
174 TestMultiControlPlane/serial/StopCluster 272.47
175 TestMultiControlPlane/serial/RestartCluster 121.09
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
177 TestMultiControlPlane/serial/AddSecondaryNode 74.26
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
182 TestJSONOutput/start/Command 88.3
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.65
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.57
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.36
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.2
210 TestMainNoArgs 0.04
211 TestMinikubeProfile 83.62
214 TestMountStart/serial/StartWithMountFirst 28.37
215 TestMountStart/serial/VerifyMountFirst 0.37
216 TestMountStart/serial/StartWithMountSecond 28.01
217 TestMountStart/serial/VerifyMountSecond 0.37
218 TestMountStart/serial/DeleteFirst 0.7
219 TestMountStart/serial/VerifyMountPostDelete 0.36
220 TestMountStart/serial/Stop 1.27
221 TestMountStart/serial/RestartStopped 23.13
222 TestMountStart/serial/VerifyMountPostStop 0.36
225 TestMultiNode/serial/FreshStart2Nodes 114.6
226 TestMultiNode/serial/DeployApp2Nodes 5.02
227 TestMultiNode/serial/PingHostFrom2Pods 0.75
228 TestMultiNode/serial/AddNode 51.28
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.57
231 TestMultiNode/serial/CopyFile 6.98
232 TestMultiNode/serial/StopNode 2.3
233 TestMultiNode/serial/StartAfterStop 37.85
234 TestMultiNode/serial/RestartKeepsNodes 337.98
235 TestMultiNode/serial/DeleteNode 2.69
236 TestMultiNode/serial/StopMultiNode 182.01
237 TestMultiNode/serial/RestartMultiNode 115.37
238 TestMultiNode/serial/ValidateNameConflict 45.06
245 TestScheduledStopUnix 113.26
249 TestRunningBinaryUpgrade 186.56
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
265 TestNoKubernetes/serial/StartWithK8s 65.93
270 TestNetworkPlugins/group/false 3.36
274 TestNoKubernetes/serial/StartWithStopK8s 37.72
275 TestNoKubernetes/serial/Start 47.77
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
277 TestNoKubernetes/serial/ProfileList 31.98
278 TestNoKubernetes/serial/Stop 1.43
279 TestNoKubernetes/serial/StartNoArgs 36.18
280 TestStoppedBinaryUpgrade/Setup 0.41
281 TestStoppedBinaryUpgrade/Upgrade 116.34
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
284 TestPause/serial/Start 104.11
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
290 TestStartStop/group/no-preload/serial/FirstStart 99.62
292 TestStartStop/group/embed-certs/serial/FirstStart 92.01
293 TestStartStop/group/no-preload/serial/DeployApp 10.27
294 TestStartStop/group/embed-certs/serial/DeployApp 9.27
295 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
296 TestStartStop/group/no-preload/serial/Stop 91.02
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
298 TestStartStop/group/embed-certs/serial/Stop 91.01
299 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
300 TestStartStop/group/no-preload/serial/SecondStart 348.66
301 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
302 TestStartStop/group/embed-certs/serial/SecondStart 313.75
305 TestStartStop/group/old-k8s-version/serial/Stop 5.3
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
310 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
311 TestStartStop/group/embed-certs/serial/Pause 2.78
313 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.54
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
317 TestStartStop/group/no-preload/serial/Pause 2.96
319 TestStartStop/group/newest-cni/serial/FirstStart 45.28
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
322 TestStartStop/group/newest-cni/serial/Stop 10.68
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
325 TestStartStop/group/newest-cni/serial/SecondStart 36.38
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.03
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
331 TestStartStop/group/newest-cni/serial/Pause 2.31
332 TestNetworkPlugins/group/auto/Start 61.93
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 323.8
335 TestNetworkPlugins/group/auto/KubeletFlags 0.21
336 TestNetworkPlugins/group/auto/NetCatPod 10.22
337 TestNetworkPlugins/group/auto/DNS 0.17
338 TestNetworkPlugins/group/auto/Localhost 0.12
339 TestNetworkPlugins/group/auto/HairPin 0.12
340 TestNetworkPlugins/group/kindnet/Start 59.92
341 TestNetworkPlugins/group/kindnet/ControllerPod 6
342 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
343 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
344 TestNetworkPlugins/group/kindnet/DNS 0.14
345 TestNetworkPlugins/group/kindnet/Localhost 0.1
346 TestNetworkPlugins/group/kindnet/HairPin 0.12
348 TestNetworkPlugins/group/calico/Start 80.48
349 TestNetworkPlugins/group/calico/ControllerPod 6.01
350 TestNetworkPlugins/group/calico/KubeletFlags 0.21
351 TestNetworkPlugins/group/calico/NetCatPod 11.27
352 TestNetworkPlugins/group/calico/DNS 0.14
353 TestNetworkPlugins/group/calico/Localhost 0.11
354 TestNetworkPlugins/group/calico/HairPin 0.12
355 TestNetworkPlugins/group/custom-flannel/Start 70.94
356 TestNetworkPlugins/group/flannel/Start 74.63
357 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
358 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.27
359 TestNetworkPlugins/group/custom-flannel/DNS 0.14
360 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
361 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
366 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.08
367 TestNetworkPlugins/group/bridge/Start 53.64
368 TestNetworkPlugins/group/flannel/KubeletFlags 0.62
369 TestNetworkPlugins/group/flannel/NetCatPod 9.22
370 TestNetworkPlugins/group/enable-default-cni/Start 112.22
371 TestNetworkPlugins/group/flannel/DNS 0.19
372 TestNetworkPlugins/group/flannel/Localhost 0.14
373 TestNetworkPlugins/group/flannel/HairPin 0.12
374 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
375 TestNetworkPlugins/group/bridge/NetCatPod 11.24
376 TestNetworkPlugins/group/bridge/DNS 21.05
377 TestNetworkPlugins/group/bridge/Localhost 0.11
378 TestNetworkPlugins/group/bridge/HairPin 0.11
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (8.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-374995 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-374995 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.62564359s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.63s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0205 02:03:42.752364   19989 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0205 02:03:42.752450   19989 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-374995
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-374995: exit status 85 (61.23638ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-374995 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |          |
	|         | -p download-only-374995        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 02:03:34
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 02:03:34.166989   20001 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:03:34.167209   20001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:34.167217   20001 out.go:358] Setting ErrFile to fd 2...
	I0205 02:03:34.167222   20001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:34.167389   20001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	W0205 02:03:34.167499   20001 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20363-12788/.minikube/config/config.json: open /home/jenkins/minikube-integration/20363-12788/.minikube/config/config.json: no such file or directory
	I0205 02:03:34.168070   20001 out.go:352] Setting JSON to true
	I0205 02:03:34.168932   20001 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2765,"bootTime":1738718249,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:03:34.169033   20001 start.go:139] virtualization: kvm guest
	I0205 02:03:34.171275   20001 out.go:97] [download-only-374995] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0205 02:03:34.171384   20001 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball: no such file or directory
	I0205 02:03:34.171419   20001 notify.go:220] Checking for updates...
	I0205 02:03:34.172747   20001 out.go:169] MINIKUBE_LOCATION=20363
	I0205 02:03:34.174032   20001 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:03:34.175330   20001 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 02:03:34.176551   20001 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 02:03:34.177766   20001 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0205 02:03:34.180029   20001 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0205 02:03:34.180231   20001 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:03:34.280117   20001 out.go:97] Using the kvm2 driver based on user configuration
	I0205 02:03:34.280140   20001 start.go:297] selected driver: kvm2
	I0205 02:03:34.280147   20001 start.go:901] validating driver "kvm2" against <nil>
	I0205 02:03:34.280464   20001 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 02:03:34.280597   20001 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 02:03:34.295639   20001 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 02:03:34.295681   20001 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0205 02:03:34.296170   20001 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0205 02:03:34.296325   20001 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0205 02:03:34.296357   20001 cni.go:84] Creating CNI manager for ""
	I0205 02:03:34.296399   20001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 02:03:34.296407   20001 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0205 02:03:34.296458   20001 start.go:340] cluster config:
	{Name:download-only-374995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-374995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:03:34.296619   20001 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 02:03:34.298636   20001 out.go:97] Downloading VM boot image ...
	I0205 02:03:34.298668   20001 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20363-12788/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0205 02:03:37.076368   20001 out.go:97] Starting "download-only-374995" primary control-plane node in "download-only-374995" cluster
	I0205 02:03:37.076416   20001 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0205 02:03:37.099758   20001 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0205 02:03:37.099798   20001 cache.go:56] Caching tarball of preloaded images
	I0205 02:03:37.099962   20001 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0205 02:03:37.101605   20001 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0205 02:03:37.101619   20001 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0205 02:03:37.132016   20001 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-374995 host does not exist
	  To start a cluster, run: "minikube start -p download-only-374995"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-374995
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (5.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-323625 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-323625 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.463473949s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (5.46s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0205 02:03:48.546216   19989 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0205 02:03:48.546267   19989 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-323625
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-323625: exit status 85 (61.667164ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-374995 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |                     |
	|         | -p download-only-374995        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:03 UTC |
	| delete  | -p download-only-374995        | download-only-374995 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:03 UTC |
	| start   | -o=json --download-only        | download-only-323625 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |                     |
	|         | -p download-only-323625        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 02:03:43
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 02:03:43.123846   20211 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:03:43.124070   20211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:43.124078   20211 out.go:358] Setting ErrFile to fd 2...
	I0205 02:03:43.124083   20211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:43.124267   20211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 02:03:43.124820   20211 out.go:352] Setting JSON to true
	I0205 02:03:43.125684   20211 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2774,"bootTime":1738718249,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:03:43.125774   20211 start.go:139] virtualization: kvm guest
	I0205 02:03:43.127873   20211 out.go:97] [download-only-323625] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 02:03:43.127973   20211 notify.go:220] Checking for updates...
	I0205 02:03:43.129401   20211 out.go:169] MINIKUBE_LOCATION=20363
	I0205 02:03:43.130883   20211 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:03:43.132219   20211 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 02:03:43.133487   20211 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 02:03:43.134758   20211 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0205 02:03:43.137184   20211 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0205 02:03:43.137485   20211 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:03:43.170238   20211 out.go:97] Using the kvm2 driver based on user configuration
	I0205 02:03:43.170265   20211 start.go:297] selected driver: kvm2
	I0205 02:03:43.170270   20211 start.go:901] validating driver "kvm2" against <nil>
	I0205 02:03:43.170570   20211 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 02:03:43.170664   20211 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20363-12788/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0205 02:03:43.185484   20211 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0205 02:03:43.185538   20211 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0205 02:03:43.186181   20211 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0205 02:03:43.186392   20211 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0205 02:03:43.186430   20211 cni.go:84] Creating CNI manager for ""
	I0205 02:03:43.186491   20211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0205 02:03:43.186507   20211 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0205 02:03:43.186573   20211 start.go:340] cluster config:
	{Name:download-only-323625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-323625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:03:43.186688   20211 iso.go:125] acquiring lock: {Name:mk486603e8d6546d81b9e7d0a893261360630790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 02:03:43.188459   20211 out.go:97] Starting "download-only-323625" primary control-plane node in "download-only-323625" cluster
	I0205 02:03:43.188481   20211 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:03:43.216371   20211 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 02:03:43.216405   20211 cache.go:56] Caching tarball of preloaded images
	I0205 02:03:43.216568   20211 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:03:43.218298   20211 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0205 02:03:43.218326   20211 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0205 02:03:43.243878   20211 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 02:03:47.006588   20211 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0205 02:03:47.006685   20211 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20363-12788/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0205 02:03:47.767063   20211 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 02:03:47.767411   20211 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/download-only-323625/config.json ...
	I0205 02:03:47.767441   20211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/download-only-323625/config.json: {Name:mk2de248ed510963efbf47ddb7d3f1f4f3944baa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:03:47.767618   20211 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:03:47.767783   20211 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20363-12788/.minikube/cache/linux/amd64/v1.32.1/kubectl
	
	
	* The control-plane node download-only-323625 host does not exist
	  To start a cluster, run: "minikube start -p download-only-323625"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-323625
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I0205 02:03:49.122855   19989 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-397529 --alsologtostderr --binary-mirror http://127.0.0.1:36929 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-397529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-397529
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestOffline (80.96s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-269713 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-269713 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m19.101808409s)
helpers_test.go:175: Cleaning up "offline-crio-269713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-269713
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-269713: (1.856109012s)
--- PASS: TestOffline (80.96s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-395572
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-395572: exit status 85 (54.220007ms)

                                                
                                                
-- stdout --
	* Profile "addons-395572" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-395572"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-395572
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-395572: exit status 85 (53.697463ms)

                                                
                                                
-- stdout --
	* Profile "addons-395572" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-395572"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (134.37s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-395572 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-395572 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m14.365027292s)
--- PASS: TestAddons/Setup (134.37s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-395572 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-395572 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-395572 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-395572 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fdadd699-3496-4062-9839-8a5f36de0948] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fdadd699-3496-4062-9839-8a5f36de0948] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003439278s
addons_test.go:633: (dbg) Run:  kubectl --context addons-395572 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-395572 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-395572 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.51s)

                                                
                                    
TestAddons/parallel/Registry (16.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 7.076811ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-z8t9s" [00899170-2971-4aba-8699-bd3bc4501a36] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003219904s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4hkv4" [5b4564c0-6a17-4a35-b52f-f28e1c4622a1] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003422966s
addons_test.go:331: (dbg) Run:  kubectl --context addons-395572 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-395572 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-395572 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.637754053s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 ip
2025/02/05 02:06:38 [DEBUG] GET http://192.168.39.234:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.41s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.71s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-t755q" [e907d229-8b09-448f-aac2-1c286fc2e489] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007061308s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-395572 addons disable inspektor-gadget --alsologtostderr -v=1: (5.697363795s)
--- PASS: TestAddons/parallel/InspektorGadget (10.71s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.96s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.904765ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0205 02:06:22.558818   19989 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0205 02:06:22.558845   19989 kapi.go:107] duration metric: took 6.995727ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-7fbb699795-62dtn" [7ed1c285-e119-4991-b562-b48bc209460b] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004243308s
addons_test.go:402: (dbg) Run:  kubectl --context addons-395572 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.96s)

                                                
                                    
TestAddons/parallel/CSI (66.4s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.007081ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-395572 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-395572 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [315a0641-0a10-4195-bc50-9a36e4c9adef] Pending
helpers_test.go:344: "task-pv-pod" [315a0641-0a10-4195-bc50-9a36e4c9adef] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [315a0641-0a10-4195-bc50-9a36e4c9adef] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.00267292s
addons_test.go:511: (dbg) Run:  kubectl --context addons-395572 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-395572 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-395572 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-395572 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-395572 delete pod task-pv-pod: (1.785214563s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-395572 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-395572 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-395572 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9c54bfcc-d8d0-407c-bc0a-eeedb77640f2] Pending
helpers_test.go:344: "task-pv-pod-restore" [9c54bfcc-d8d0-407c-bc0a-eeedb77640f2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9c54bfcc-d8d0-407c-bc0a-eeedb77640f2] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003763872s
addons_test.go:553: (dbg) Run:  kubectl --context addons-395572 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-395572 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-395572 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-395572 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.734349472s)
--- PASS: TestAddons/parallel/CSI (66.40s)
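
Note: the long run of helpers_test.go:394 lines above is the test helper polling the restored PVC's phase with a jsonpath query until it reports Bound. The Go sketch below is illustrative only (it is not minikube's helper code; the function name is made up, while the context, namespace and PVC name are taken from the log) and shows the general shape of such a poll loop.

// pollpvc.go: minimal sketch of a PVC phase poll loop, not the test's code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase shells out to kubectl until the PVC reports the wanted
// phase or the deadline passes, mirroring the repeated jsonpath queries above.
func waitForPVCPhase(context, namespace, pvc, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", pvc, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // re-poll after a short pause
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", namespace, pvc, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-395572", "default", "hpvc-restore", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}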

                                                
                                    
TestAddons/parallel/Headlamp (19.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-395572 --alsologtostderr -v=1
I0205 02:06:22.551861   19989 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-qqzq9" [6161dee0-4e92-4f84-ac45-4167b82e710c] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-qqzq9" [6161dee0-4e92-4f84-ac45-4167b82e710c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-qqzq9" [6161dee0-4e92-4f84-ac45-4167b82e710c] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004049093s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-395572 addons disable headlamp --alsologtostderr -v=1: (5.953856259s)
--- PASS: TestAddons/parallel/Headlamp (19.86s)
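
Note: the Headlamp check above waits for pods carrying the app.kubernetes.io/name=headlamp label to become Ready. Outside the test framework, roughly the same wait can be expressed with `kubectl wait`, as in this illustrative sketch (profile, namespace, label and timeout are taken from the log; this is not the code the test itself runs).

// waitheadlamp.go: illustrative use of `kubectl wait` for the same condition.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "addons-395572",
		"-n", "headlamp", "wait", "pod",
		"-l", "app.kubernetes.io/name=headlamp",
		"--for=condition=Ready", "--timeout=8m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}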

                                                
                                    
TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-pkgtg" [a1d4bfec-0e67-44ab-b682-fcb8e7ab8180] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002976281s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                    
TestAddons/parallel/LocalPath (18.08s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-395572 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-395572 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-395572 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cde9289a-21f6-40c1-9e82-2e94f7b013a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cde9289a-21f6-40c1-9e82-2e94f7b013a7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cde9289a-21f6-40c1-9e82-2e94f7b013a7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.002578614s
addons_test.go:906: (dbg) Run:  kubectl --context addons-395572 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 ssh "cat /opt/local-path-provisioner/pvc-fecc77c1-5a4e-42cb-af0d-0ce82b98a634_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-395572 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-395572 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (18.08s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.7s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2pc2d" [e2d6cd73-98b6-4f84-a95f-df50eed11a24] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.065391192s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.70s)

                                                
                                    
TestAddons/parallel/Yakd (11.08s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-76ghn" [3a8c652c-a610-4531-9433-b99e4e02efb6] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003659161s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-395572 addons disable yakd --alsologtostderr -v=1: (6.07738214s)
--- PASS: TestAddons/parallel/Yakd (11.08s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-395572
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-395572: (1m30.942609878s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-395572
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-395572
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-395572
--- PASS: TestAddons/StoppedEnableDisable (91.22s)

                                                
                                    
TestCertOptions (74.26s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-653669 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-653669 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m12.996913479s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-653669 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-653669 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-653669 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-653669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-653669
--- PASS: TestCertOptions (74.26s)
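
Note: TestCertOptions inspects the generated apiserver certificate with openssl to confirm the extra --apiserver-names and --apiserver-ips ended up in the SANs. Purely as an illustration, the same inspection can be done in Go with crypto/x509; the sketch below assumes a plain `minikube` binary on PATH rather than the out/minikube-linux-amd64 build used here, and does not reproduce the test's exact assertions.

// checksans.go: sketch of reading apiserver.crt from the VM and listing SANs.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"net"
	"os/exec"
)

func main() {
	// Fetch the PEM-encoded apiserver certificate out of the node.
	pemBytes, err := exec.Command("minikube", "-p", "cert-options-653669",
		"ssh", "sudo cat /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames) // should include www.google.com and localhost
	wantIP := net.ParseIP("192.168.15.15")
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			fmt.Println("found expected IP SAN", wantIP)
		}
	}
}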

                                                
                                    
TestCertExpiration (291.23s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-908105 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-908105 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m21.338238406s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-908105 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-908105 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (29.094313379s)
helpers_test.go:175: Cleaning up "cert-expiration-908105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-908105
--- PASS: TestCertExpiration (291.23s)
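
Note: the two starts above first mint certificates that expire in 3 minutes (--cert-expiration=3m) and then restart with --cert-expiration=8760h (one year), which regenerates longer-lived certificates. The sketch below only illustrates what that flag controls: it reads a certificate file (a placeholder path, not one taken from the test) and reports its NotAfter date.

// certexpiry.go: rough illustration of inspecting a certificate's expiry.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	path := "apiserver.crt" // placeholder path, not taken from the test
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	remaining := time.Until(cert.NotAfter)
	fmt.Printf("certificate expires %s (%s from now)\n",
		cert.NotAfter.Format(time.RFC3339), remaining.Round(time.Minute))
	// With --cert-expiration=3m this window is tiny; --cert-expiration=8760h
	// yields roughly a year of validity.
}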

                                                
                                    
TestForceSystemdFlag (64.01s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-467430 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-467430 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m2.80188943s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-467430 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-467430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-467430
--- PASS: TestForceSystemdFlag (64.01s)
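
Note: the flag test cats /etc/crio/crio.conf.d/02-crio.conf from inside the VM; presumably the assertion is that --force-systemd switched CRI-O to the systemd cgroup manager. Under that assumption, this standalone sketch performs the same read plus a simple string check (profile name from the log, a `minikube` binary on PATH assumed).

// cgroupcheck.go: sketch of checking CRI-O's cgroup manager in the drop-in config.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "force-systemd-flag-467430",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// Assumption: --force-systemd results in this setting in the drop-in file.
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in drop-in config")
	}
}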

                                                
                                    
TestForceSystemdEnv (97.22s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-409141 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-409141 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m36.191642155s)
helpers_test.go:175: Cleaning up "force-systemd-env-409141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-409141
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-409141: (1.027552084s)
--- PASS: TestForceSystemdEnv (97.22s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.08s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0205 03:06:12.953520   19989 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0205 03:06:12.953651   19989 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0205 03:06:12.980379   19989 install.go:62] docker-machine-driver-kvm2: exit status 1
W0205 03:06:12.980866   19989 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0205 03:06:12.980957   19989 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate82117604/001/docker-machine-driver-kvm2
I0205 03:06:13.189712   19989 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate82117604/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000993478 gz:0xc000993500 tar:0xc0009934b0 tar.bz2:0xc0009934c0 tar.gz:0xc0009934d0 tar.xz:0xc0009934e0 tar.zst:0xc0009934f0 tbz2:0xc0009934c0 tgz:0xc0009934d0 txz:0xc0009934e0 tzst:0xc0009934f0 xz:0xc000993508 zip:0xc000993510 zst:0xc000993520] Getters:map[file:0xc001c352d0 http:0xc0008ce1e0 https:0xc0008ce230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code
: 404. trying to get the common version
I0205 03:06:13.189780   19989 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate82117604/001/docker-machine-driver-kvm2
I0205 03:06:15.175566   19989 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0205 03:06:15.175651   19989 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0205 03:06:15.202948   19989 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0205 03:06:15.202977   19989 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0205 03:06:15.203047   19989 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0205 03:06:15.203076   19989 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate82117604/002/docker-machine-driver-kvm2
I0205 03:06:15.259251   19989 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate82117604/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000993478 gz:0xc000993500 tar:0xc0009934b0 tar.bz2:0xc0009934c0 tar.gz:0xc0009934d0 tar.xz:0xc0009934e0 tar.zst:0xc0009934f0 tbz2:0xc0009934c0 tgz:0xc0009934d0 txz:0xc0009934e0 tzst:0xc0009934f0 xz:0xc000993508 zip:0xc000993510 zst:0xc000993520] Getters:map[file:0xc001e602f0 http:0xc00074e5f0 https:0xc00074e640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code
: 404. trying to get the common version
I0205 03:06:15.259291   19989 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate82117604/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.08s)
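
Note: the 404s above show the driver updater's fallback: it first tries the arch-suffixed release asset (docker-machine-driver-kvm2-amd64) together with its .sha256 checksum file and, when that checksum file is missing, falls back to the un-suffixed "common" asset. The sketch below models only that URL selection; the real download goes through go-getter with checksum verification.

// driverfallback.go: simplified sketch of the arch-specific vs. common asset choice.
package main

import (
	"fmt"
	"net/http"
)

const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"

// pickURL probes the arch-specific checksum file and falls back to the
// common asset name when it is absent (e.g. HTTP 404, as in the log above).
func pickURL() (string, error) {
	resp, err := http.Head(base + "-amd64.sha256")
	if err != nil {
		return "", err
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return base + "-amd64", nil // arch-specific asset has a checksum file
	}
	return base, nil // fall back to the common asset name
}

func main() {
	url, err := pickURL()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("would download:", url)
}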

                                                
                                    
TestErrorSpam/setup (43.42s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-809992 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-809992 --driver=kvm2  --container-runtime=crio
E0205 02:11:04.769637   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:04.776055   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:04.787460   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:04.808874   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:04.850233   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:04.931652   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:05.093163   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:05.414868   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:06.056875   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:07.338475   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:09.901333   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:15.022980   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:11:25.265231   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-809992 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-809992 --driver=kvm2  --container-runtime=crio: (43.417115279s)
--- PASS: TestErrorSpam/setup (43.42s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
TestErrorSpam/unpause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
TestErrorSpam/stop (5.35s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 stop: (2.294509675s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 stop
E0205 02:11:45.747344   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 stop: (1.492352899s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-809992 --log_dir /tmp/nospam-809992 stop: (1.560564652s)
--- PASS: TestErrorSpam/stop (5.35s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20363-12788/.minikube/files/etc/test/nested/copy/19989/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (53.32s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-910650 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0205 02:12:26.709494   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-910650 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (53.319963301s)
--- PASS: TestFunctional/serial/StartWithProxy (53.32s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.61s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0205 02:12:41.369964   19989 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-910650 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-910650 --alsologtostderr -v=8: (41.61120885s)
functional_test.go:680: soft start took 41.612055398s for "functional-910650" cluster.
I0205 02:13:22.981481   19989 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (41.61s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-910650 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-910650 cache add registry.k8s.io/pause:3.1: (1.089625739s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-910650 cache add registry.k8s.io/pause:3.3: (1.247780108s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-910650 cache add registry.k8s.io/pause:latest: (1.039498414s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-910650 /tmp/TestFunctionalserialCacheCmdcacheadd_local2186283744/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 cache add minikube-local-cache-test:functional-910650
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-910650 cache add minikube-local-cache-test:functional-910650: (1.63352411s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 cache delete minikube-local-cache-test:functional-910650
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-910650
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.94s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-910650 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (210.383884ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
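
Note: the reload test wipes a cached image inside the node, confirms crictl no longer sees it, runs `minikube cache reload`, and confirms the image is back. A rough standalone equivalent (assuming a `minikube` binary on PATH; profile and image taken from the log, error handling deliberately minimal) looks like this.

// cachereload.go: sketch of the remove / reload / verify sequence above.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, tolerating failures
// (the first inspecti below is expected to fail).
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		fmt.Println("(exited with error:", err, ")")
	}
}

func main() {
	profile := "functional-910650"
	image := "registry.k8s.io/pause:latest"
	run("minikube", "-p", profile, "ssh", "sudo crictl rmi "+image)
	run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image) // expected to fail
	run("minikube", "-p", profile, "cache", "reload")
	run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image) // should succeed again
}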

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 kubectl -- --context functional-910650 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-910650 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.44s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-910650 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0205 02:13:48.632586   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-910650 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.444504645s)
functional_test.go:778: restart took 31.444622543s for "functional-910650" cluster.
I0205 02:14:02.137444   19989 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (31.44s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-910650 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
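
Note: the phase/status lines above come from inspecting the control-plane pods as JSON. A minimal sketch of that kind of check, using a throwaway struct rather than minikube's own types, could look like the following (context name taken from the log).

// componenthealth.go: sketch of listing control-plane pods and their readiness.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList models only the fields needed for this check.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-910650",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		// Static control-plane pods carry a "component" label (etcd, kube-apiserver, ...).
		fmt.Printf("%s phase: %s, Ready: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}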

                                                
                                    
TestFunctional/serial/LogsCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-910650 logs: (1.35331832s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 logs --file /tmp/TestFunctionalserialLogsFileCmd1042194051/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-910650 logs --file /tmp/TestFunctionalserialLogsFileCmd1042194051/001/logs.txt: (1.331995673s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                    
TestFunctional/serial/InvalidService (3.98s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-910650 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-910650
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-910650: exit status 115 (260.359357ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.25:31552 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-910650 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.98s)
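
Note: SVC_UNREACHABLE here means the Service object exists but no running pod backs it. One way to observe that condition directly (not the check minikube itself performs) is to look at the service's endpoint addresses, which are empty for invalid-svc; context and service name are taken from the log.

// svcendpoints.go: illustrative check for a service with no ready endpoints.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-910650",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	addrs := strings.Fields(string(out))
	if len(addrs) == 0 {
		fmt.Println("invalid-svc has no ready endpoints (no running pods behind it)")
	} else {
		fmt.Println("ready endpoint IPs:", addrs)
	}
}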

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-910650 config get cpus: exit status 14 (51.005357ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-910650 config get cpus: exit status 14 (55.427606ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
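
Note: `minikube config get` exits with status 14 when the key is not set, which is what the two Non-zero exit lines above show. A caller can distinguish that case from other failures via the exit code; the sketch below assumes a `minikube` binary on PATH and reuses the profile name from the log, and the meaning of code 14 is taken from this report rather than from any API.

// configexit.go: sketch of handling the "key not set" exit code from config get.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-910650",
		"config", "get", "cpus").CombinedOutput()
	if err == nil {
		fmt.Printf("cpus is set to %s", out)
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("cpus is not set in the minikube config")
		return
	}
	fmt.Println("config get failed:", err)
}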

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-910650 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-910650 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28280: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.50s)

                                                
                                    
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-910650 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-910650 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (174.959274ms)

                                                
                                                
-- stdout --
	* [functional-910650] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:14:21.540495   27731 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:14:21.540623   27731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:21.540629   27731 out.go:358] Setting ErrFile to fd 2...
	I0205 02:14:21.540635   27731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:21.540912   27731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 02:14:21.541645   27731 out.go:352] Setting JSON to false
	I0205 02:14:21.542963   27731 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3413,"bootTime":1738718249,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:14:21.543045   27731 start.go:139] virtualization: kvm guest
	I0205 02:14:21.544941   27731 out.go:177] * [functional-910650] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 02:14:21.546095   27731 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 02:14:21.546122   27731 notify.go:220] Checking for updates...
	I0205 02:14:21.548310   27731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:14:21.549478   27731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 02:14:21.550728   27731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 02:14:21.551839   27731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 02:14:21.552995   27731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 02:14:21.554512   27731 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:14:21.554878   27731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:14:21.554951   27731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:14:21.585186   27731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
	I0205 02:14:21.585614   27731 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:14:21.586348   27731 main.go:141] libmachine: Using API Version  1
	I0205 02:14:21.586375   27731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:14:21.586803   27731 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:14:21.586964   27731 main.go:141] libmachine: (functional-910650) Calling .DriverName
	I0205 02:14:21.587208   27731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:14:21.587642   27731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:14:21.587685   27731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:14:21.602667   27731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I0205 02:14:21.603145   27731 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:14:21.603679   27731 main.go:141] libmachine: Using API Version  1
	I0205 02:14:21.603700   27731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:14:21.604247   27731 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:14:21.604488   27731 main.go:141] libmachine: (functional-910650) Calling .DriverName
	I0205 02:14:21.640130   27731 out.go:177] * Using the kvm2 driver based on existing profile
	I0205 02:14:21.641302   27731 start.go:297] selected driver: kvm2
	I0205 02:14:21.641324   27731 start.go:901] validating driver "kvm2" against &{Name:functional-910650 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-910650 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:14:21.641485   27731 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 02:14:21.643670   27731 out.go:201] 
	W0205 02:14:21.644670   27731 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0205 02:14:21.645741   27731 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-910650 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-910650 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-910650 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (215.095898ms)

                                                
                                                
-- stdout --
	* [functional-910650] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:14:21.306634   27686 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:14:21.306766   27686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:21.306776   27686 out.go:358] Setting ErrFile to fd 2...
	I0205 02:14:21.306780   27686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:21.307049   27686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 02:14:21.307593   27686 out.go:352] Setting JSON to false
	I0205 02:14:21.308543   27686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3412,"bootTime":1738718249,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:14:21.308643   27686 start.go:139] virtualization: kvm guest
	I0205 02:14:21.330929   27686 out.go:177] * [functional-910650] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0205 02:14:21.385122   27686 notify.go:220] Checking for updates...
	I0205 02:14:21.385206   27686 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 02:14:21.386704   27686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:14:21.388052   27686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 02:14:21.389536   27686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 02:14:21.390674   27686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 02:14:21.391866   27686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 02:14:21.393604   27686 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:14:21.394237   27686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:14:21.394326   27686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:14:21.410384   27686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44489
	I0205 02:14:21.410858   27686 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:14:21.411467   27686 main.go:141] libmachine: Using API Version  1
	I0205 02:14:21.411492   27686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:14:21.411867   27686 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:14:21.412036   27686 main.go:141] libmachine: (functional-910650) Calling .DriverName
	I0205 02:14:21.412315   27686 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:14:21.412708   27686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:14:21.412767   27686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:14:21.429039   27686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41305
	I0205 02:14:21.429422   27686 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:14:21.429903   27686 main.go:141] libmachine: Using API Version  1
	I0205 02:14:21.429929   27686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:14:21.430217   27686 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:14:21.430435   27686 main.go:141] libmachine: (functional-910650) Calling .DriverName
	I0205 02:14:21.463660   27686 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0205 02:14:21.464727   27686 start.go:297] selected driver: kvm2
	I0205 02:14:21.464753   27686 start.go:901] validating driver "kvm2" against &{Name:functional-910650 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-910650 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:14:21.464883   27686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 02:14:21.467280   27686 out.go:201] 
	W0205 02:14:21.468370   27686 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0205 02:14:21.469483   27686 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
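The French output above is minikube's own localization of the RSRC_INSUFFICIENT_REQ_MEMORY error ("Requested memory allocation 250MiB is less than the usable minimum of 1800MB"); the test simply re-runs the same dry-run start under a French locale. A minimal reproduction sketch, assuming minikube picks the locale up from the LC_ALL/LANG environment variables and reusing the flags from the command logged above:

	# assumption: LC_ALL drives locale detection; flags copied from the logged command
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-910650 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
	# expected: exit status 23 with the localized RSRC_INSUFFICIENT_REQ_MEMORY message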

                                                
                                    
TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-910650 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-910650 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-k6qhb" [5dd71066-2b27-4030-ad19-eec45fdb7bac] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-k6qhb" [5dd71066-2b27-4030-ad19-eec45fdb7bac] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003792984s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.25:30733
functional_test.go:1692: http://192.168.39.25:30733: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-k6qhb

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.25:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.25:30733
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.56s)
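The response body above is the stock echoserver reply, so the NodePort wiring can also be checked by hand. A short sketch, assuming curl is available on the host and reusing the service --url lookup from the log (port 30733 is whatever NodePort minikube allocated for this run):

	# assumption: curl available on the host that runs the tests
	URL=$(out/minikube-linux-amd64 -p functional-910650 service hello-node-connect --url)
	curl -s "$URL" | grep Hostname   # should echo the hello-node-connect pod's hostname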

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh -n functional-910650 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 cp functional-910650:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4040620796/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh -n functional-910650 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh -n functional-910650 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)

                                                
                                    
TestFunctional/parallel/MySQL (31.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-910650 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-dq6dw" [ef4e7751-e58b-4b47-a709-8c168fcf136a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-dq6dw" [ef4e7751-e58b-4b47-a709-8c168fcf136a] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.002758607s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-910650 exec mysql-58ccfd96bb-dq6dw -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-910650 exec mysql-58ccfd96bb-dq6dw -- mysql -ppassword -e "show databases;": exit status 1 (131.047542ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0205 02:14:53.993414   19989 retry.go:31] will retry after 981.5451ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-910650 exec mysql-58ccfd96bb-dq6dw -- mysql -ppassword -e "show databases;"
E0205 02:16:04.764518   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:16:32.474674   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/MySQL (31.45s)
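The first exec above fails with ERROR 2002 because mysqld is still creating its socket even though the pod already reports Running; retry.go then waits roughly a second and the second attempt succeeds. A hedged sketch of the same poll-until-ready loop, using the pod name and the -ppassword value copied from the commands above:

	# assumption: pod name and password taken from the logged kubectl command
	for i in $(seq 1 30); do
	  kubectl --context functional-910650 exec mysql-58ccfd96bb-dq6dw -- \
	    mysql -ppassword -e "show databases;" && break
	  sleep 1
	done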

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/19989/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "sudo cat /etc/test/nested/copy/19989/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/19989.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "sudo cat /etc/ssl/certs/19989.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/19989.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "sudo cat /usr/share/ca-certificates/19989.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/199892.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "sudo cat /etc/ssl/certs/199892.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/199892.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "sudo cat /usr/share/ca-certificates/199892.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-910650 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-910650 ssh "sudo systemctl is-active docker": exit status 1 (273.779915ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-910650 ssh "sudo systemctl is-active containerd": exit status 1 (345.199886ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                    
TestFunctional/parallel/License (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-910650 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-910650 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-wrkgl" [86972029-8f91-48c5-8edf-8aa85ed26ca7] Pending
helpers_test.go:344: "hello-node-fcfd88b6f-wrkgl" [86972029-8f91-48c5-8edf-8aa85ed26ca7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-wrkgl" [86972029-8f91-48c5-8edf-8aa85ed26ca7] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003278356s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "397.570956ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "49.518163ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "401.930417ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "48.954213ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-910650 /tmp/TestFunctionalparallelMountCmdany-port338364960/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1738721651458407300" to /tmp/TestFunctionalparallelMountCmdany-port338364960/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1738721651458407300" to /tmp/TestFunctionalparallelMountCmdany-port338364960/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1738721651458407300" to /tmp/TestFunctionalparallelMountCmdany-port338364960/001/test-1738721651458407300
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-910650 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (256.62855ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0205 02:14:11.715395   19989 retry.go:31] will retry after 538.339688ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb  5 02:14 created-by-test
-rw-r--r-- 1 docker docker 24 Feb  5 02:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb  5 02:14 test-1738721651458407300
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh cat /mount-9p/test-1738721651458407300
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-910650 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [953b0a2f-4dbb-4fcc-85c1-f51d97e97e61] Pending
helpers_test.go:344: "busybox-mount" [953b0a2f-4dbb-4fcc-85c1-f51d97e97e61] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [953b0a2f-4dbb-4fcc-85c1-f51d97e97e61] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [953b0a2f-4dbb-4fcc-85c1-f51d97e97e61] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.002589116s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-910650 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-910650 /tmp/TestFunctionalparallelMountCmdany-port338364960/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-910650 /tmp/TestFunctionalparallelMountCmdspecific-port3433707166/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-910650 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.839961ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0205 02:14:20.809714   19989 retry.go:31] will retry after 681.485268ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-910650 /tmp/TestFunctionalparallelMountCmdspecific-port3433707166/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-910650 ssh "sudo umount -f /mount-9p": exit status 1 (281.850726ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-910650 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-910650 /tmp/TestFunctionalparallelMountCmdspecific-port3433707166/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 service list -o json
functional_test.go:1511: Took "470.547088ms" to run "out/minikube-linux-amd64 -p functional-910650 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.25:32671
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.25:32671
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-910650 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095999341/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-910650 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095999341/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-910650 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095999341/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-910650 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-910650 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095999341/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-910650 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095999341/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-910650 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1095999341/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.88s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 version --short
2025/02/05 02:14:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-910650 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-910650
localhost/kicbase/echo-server:functional-910650
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-910650 image ls --format short --alsologtostderr:
I0205 02:14:35.525738   28818 out.go:345] Setting OutFile to fd 1 ...
I0205 02:14:35.526056   28818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:35.526069   28818 out.go:358] Setting ErrFile to fd 2...
I0205 02:14:35.526075   28818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:35.526398   28818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
I0205 02:14:35.527190   28818 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:35.527348   28818 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:35.527881   28818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0205 02:14:35.527961   28818 main.go:141] libmachine: Launching plugin server for driver kvm2
I0205 02:14:35.544945   28818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37901
I0205 02:14:35.545381   28818 main.go:141] libmachine: () Calling .GetVersion
I0205 02:14:35.545955   28818 main.go:141] libmachine: Using API Version  1
I0205 02:14:35.545974   28818 main.go:141] libmachine: () Calling .SetConfigRaw
I0205 02:14:35.546324   28818 main.go:141] libmachine: () Calling .GetMachineName
I0205 02:14:35.546515   28818 main.go:141] libmachine: (functional-910650) Calling .GetState
I0205 02:14:35.548255   28818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0205 02:14:35.548291   28818 main.go:141] libmachine: Launching plugin server for driver kvm2
I0205 02:14:35.561840   28818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43021
I0205 02:14:35.562182   28818 main.go:141] libmachine: () Calling .GetVersion
I0205 02:14:35.562619   28818 main.go:141] libmachine: Using API Version  1
I0205 02:14:35.562641   28818 main.go:141] libmachine: () Calling .SetConfigRaw
I0205 02:14:35.562935   28818 main.go:141] libmachine: () Calling .GetMachineName
I0205 02:14:35.563106   28818 main.go:141] libmachine: (functional-910650) Calling .DriverName
I0205 02:14:35.563302   28818 ssh_runner.go:195] Run: systemctl --version
I0205 02:14:35.563335   28818 main.go:141] libmachine: (functional-910650) Calling .GetSSHHostname
I0205 02:14:35.566185   28818 main.go:141] libmachine: (functional-910650) DBG | domain functional-910650 has defined MAC address 52:54:00:d5:92:0a in network mk-functional-910650
I0205 02:14:35.566606   28818 main.go:141] libmachine: (functional-910650) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:92:0a", ip: ""} in network mk-functional-910650: {Iface:virbr1 ExpiryTime:2025-02-05 03:12:02 +0000 UTC Type:0 Mac:52:54:00:d5:92:0a Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-910650 Clientid:01:52:54:00:d5:92:0a}
I0205 02:14:35.566634   28818 main.go:141] libmachine: (functional-910650) DBG | domain functional-910650 has defined IP address 192.168.39.25 and MAC address 52:54:00:d5:92:0a in network mk-functional-910650
I0205 02:14:35.566746   28818 main.go:141] libmachine: (functional-910650) Calling .GetSSHPort
I0205 02:14:35.566911   28818 main.go:141] libmachine: (functional-910650) Calling .GetSSHKeyPath
I0205 02:14:35.567087   28818 main.go:141] libmachine: (functional-910650) Calling .GetSSHUsername
I0205 02:14:35.567232   28818 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/functional-910650/id_rsa Username:docker}
I0205 02:14:35.647744   28818 ssh_runner.go:195] Run: sudo crictl images --output json
I0205 02:14:35.684720   28818 main.go:141] libmachine: Making call to close driver server
I0205 02:14:35.684733   28818 main.go:141] libmachine: (functional-910650) Calling .Close
I0205 02:14:35.685064   28818 main.go:141] libmachine: (functional-910650) DBG | Closing plugin on server side
I0205 02:14:35.685063   28818 main.go:141] libmachine: Successfully made call to close driver server
I0205 02:14:35.685106   28818 main.go:141] libmachine: Making call to close connection to plugin binary
I0205 02:14:35.685123   28818 main.go:141] libmachine: Making call to close driver server
I0205 02:14:35.685140   28818 main.go:141] libmachine: (functional-910650) Calling .Close
I0205 02:14:35.685413   28818 main.go:141] libmachine: Successfully made call to close driver server
I0205 02:14:35.685454   28818 main.go:141] libmachine: (functional-910650) DBG | Closing plugin on server side
I0205 02:14:35.685461   28818 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-910650 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/kicbase/echo-server           | functional-910650  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-910650  | 945f6c6fff336 | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| docker.io/library/nginx                 | latest             | c59e925d63f3a | 196MB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-910650 image ls --format table --alsologtostderr:
I0205 02:14:36.090552   28941 out.go:345] Setting OutFile to fd 1 ...
I0205 02:14:36.090811   28941 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:36.090821   28941 out.go:358] Setting ErrFile to fd 2...
I0205 02:14:36.090825   28941 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:36.090998   28941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
I0205 02:14:36.091648   28941 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:36.091770   28941 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:36.092172   28941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0205 02:14:36.092248   28941 main.go:141] libmachine: Launching plugin server for driver kvm2
I0205 02:14:36.107163   28941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36629
I0205 02:14:36.107680   28941 main.go:141] libmachine: () Calling .GetVersion
I0205 02:14:36.108328   28941 main.go:141] libmachine: Using API Version  1
I0205 02:14:36.108357   28941 main.go:141] libmachine: () Calling .SetConfigRaw
I0205 02:14:36.108688   28941 main.go:141] libmachine: () Calling .GetMachineName
I0205 02:14:36.108875   28941 main.go:141] libmachine: (functional-910650) Calling .GetState
I0205 02:14:36.110689   28941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0205 02:14:36.110735   28941 main.go:141] libmachine: Launching plugin server for driver kvm2
I0205 02:14:36.125618   28941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
I0205 02:14:36.126028   28941 main.go:141] libmachine: () Calling .GetVersion
I0205 02:14:36.126495   28941 main.go:141] libmachine: Using API Version  1
I0205 02:14:36.126519   28941 main.go:141] libmachine: () Calling .SetConfigRaw
I0205 02:14:36.126849   28941 main.go:141] libmachine: () Calling .GetMachineName
I0205 02:14:36.127036   28941 main.go:141] libmachine: (functional-910650) Calling .DriverName
I0205 02:14:36.127253   28941 ssh_runner.go:195] Run: systemctl --version
I0205 02:14:36.127281   28941 main.go:141] libmachine: (functional-910650) Calling .GetSSHHostname
I0205 02:14:36.129875   28941 main.go:141] libmachine: (functional-910650) DBG | domain functional-910650 has defined MAC address 52:54:00:d5:92:0a in network mk-functional-910650
I0205 02:14:36.130264   28941 main.go:141] libmachine: (functional-910650) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:92:0a", ip: ""} in network mk-functional-910650: {Iface:virbr1 ExpiryTime:2025-02-05 03:12:02 +0000 UTC Type:0 Mac:52:54:00:d5:92:0a Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-910650 Clientid:01:52:54:00:d5:92:0a}
I0205 02:14:36.130298   28941 main.go:141] libmachine: (functional-910650) DBG | domain functional-910650 has defined IP address 192.168.39.25 and MAC address 52:54:00:d5:92:0a in network mk-functional-910650
I0205 02:14:36.130417   28941 main.go:141] libmachine: (functional-910650) Calling .GetSSHPort
I0205 02:14:36.130606   28941 main.go:141] libmachine: (functional-910650) Calling .GetSSHKeyPath
I0205 02:14:36.130763   28941 main.go:141] libmachine: (functional-910650) Calling .GetSSHUsername
I0205 02:14:36.130894   28941 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/functional-910650/id_rsa Username:docker}
I0205 02:14:36.230671   28941 ssh_runner.go:195] Run: sudo crictl images --output json
I0205 02:14:36.621050   28941 main.go:141] libmachine: Making call to close driver server
I0205 02:14:36.621066   28941 main.go:141] libmachine: (functional-910650) Calling .Close
I0205 02:14:36.621356   28941 main.go:141] libmachine: (functional-910650) DBG | Closing plugin on server side
I0205 02:14:36.621381   28941 main.go:141] libmachine: Successfully made call to close driver server
I0205 02:14:36.621398   28941 main.go:141] libmachine: Making call to close connection to plugin binary
I0205 02:14:36.621413   28941 main.go:141] libmachine: Making call to close driver server
I0205 02:14:36.621427   28941 main.go:141] libmachine: (functional-910650) Calling .Close
I0205 02:14:36.621656   28941 main.go:141] libmachine: Successfully made call to close driver server
I0205 02:14:36.621670   28941 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-910650 image ls --format json --alsologtostderr:
[{"id":"945f6c6fff3363198c53405e4d82074c66dc8667c3babf5b3b421caaae0b0999","repoDigests":["localhost/minikube-local-cache-test@sha256:a14db74cbfcd6232124f43bee9b0bdd3420d8faac62e7ee50547ab42374b96fc"],"repoTags":["localhost/minikube-local-cache-test:functional-910650"],"size":"3330"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"da86e6ba6ca197bf6bc5e
9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-910650"],"size":"4943877"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d3
2d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"c59e925d63f3aa135bfa9d82cb03fba9ee30edb22ebe6c9d4f43824312ba3d9b","repoDigests":["docker.io/library/nginx@sha256:bc2f6a7c8ddbccf55bdb19659ce3b0a92ca6559e86d42677a5a02e
f6bda2fcef","docker.io/library/nginx@sha256:da837cfb72cb98fdd8efa8f2c8d3d29b89b327c07d45bf564d52835d787a0892"],"repoTags":["docker.io/library/nginx:latest"],"size":"195872148"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-ap
iserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf98
2b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/
pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-910650 image ls --format json --alsologtostderr:
I0205 02:14:35.773798   28875 out.go:345] Setting OutFile to fd 1 ...
I0205 02:14:35.774164   28875 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:35.774177   28875 out.go:358] Setting ErrFile to fd 2...
I0205 02:14:35.774185   28875 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:35.774625   28875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
I0205 02:14:35.775842   28875 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:35.775954   28875 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:35.776371   28875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0205 02:14:35.776411   28875 main.go:141] libmachine: Launching plugin server for driver kvm2
I0205 02:14:35.793176   28875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40213
I0205 02:14:35.793699   28875 main.go:141] libmachine: () Calling .GetVersion
I0205 02:14:35.794256   28875 main.go:141] libmachine: Using API Version  1
I0205 02:14:35.794280   28875 main.go:141] libmachine: () Calling .SetConfigRaw
I0205 02:14:35.794714   28875 main.go:141] libmachine: () Calling .GetMachineName
I0205 02:14:35.794904   28875 main.go:141] libmachine: (functional-910650) Calling .GetState
I0205 02:14:35.797366   28875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0205 02:14:35.797413   28875 main.go:141] libmachine: Launching plugin server for driver kvm2
I0205 02:14:35.817259   28875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
I0205 02:14:35.817720   28875 main.go:141] libmachine: () Calling .GetVersion
I0205 02:14:35.818198   28875 main.go:141] libmachine: Using API Version  1
I0205 02:14:35.818217   28875 main.go:141] libmachine: () Calling .SetConfigRaw
I0205 02:14:35.818641   28875 main.go:141] libmachine: () Calling .GetMachineName
I0205 02:14:35.818809   28875 main.go:141] libmachine: (functional-910650) Calling .DriverName
I0205 02:14:35.819021   28875 ssh_runner.go:195] Run: systemctl --version
I0205 02:14:35.819043   28875 main.go:141] libmachine: (functional-910650) Calling .GetSSHHostname
I0205 02:14:35.822011   28875 main.go:141] libmachine: (functional-910650) DBG | domain functional-910650 has defined MAC address 52:54:00:d5:92:0a in network mk-functional-910650
I0205 02:14:35.822546   28875 main.go:141] libmachine: (functional-910650) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:92:0a", ip: ""} in network mk-functional-910650: {Iface:virbr1 ExpiryTime:2025-02-05 03:12:02 +0000 UTC Type:0 Mac:52:54:00:d5:92:0a Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-910650 Clientid:01:52:54:00:d5:92:0a}
I0205 02:14:35.822566   28875 main.go:141] libmachine: (functional-910650) DBG | domain functional-910650 has defined IP address 192.168.39.25 and MAC address 52:54:00:d5:92:0a in network mk-functional-910650
I0205 02:14:35.822740   28875 main.go:141] libmachine: (functional-910650) Calling .GetSSHPort
I0205 02:14:35.822881   28875 main.go:141] libmachine: (functional-910650) Calling .GetSSHKeyPath
I0205 02:14:35.823000   28875 main.go:141] libmachine: (functional-910650) Calling .GetSSHUsername
I0205 02:14:35.823107   28875 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/functional-910650/id_rsa Username:docker}
I0205 02:14:35.934716   28875 ssh_runner.go:195] Run: sudo crictl images --output json
I0205 02:14:36.038829   28875 main.go:141] libmachine: Making call to close driver server
I0205 02:14:36.038846   28875 main.go:141] libmachine: (functional-910650) Calling .Close
I0205 02:14:36.039145   28875 main.go:141] libmachine: (functional-910650) DBG | Closing plugin on server side
I0205 02:14:36.039187   28875 main.go:141] libmachine: Successfully made call to close driver server
I0205 02:14:36.039203   28875 main.go:141] libmachine: Making call to close connection to plugin binary
I0205 02:14:36.039212   28875 main.go:141] libmachine: Making call to close driver server
I0205 02:14:36.039223   28875 main.go:141] libmachine: (functional-910650) Calling .Close
I0205 02:14:36.039431   28875 main.go:141] libmachine: Successfully made call to close driver server
I0205 02:14:36.039445   28875 main.go:141] libmachine: Making call to close connection to plugin binary
I0205 02:14:36.039461   28875 main.go:141] libmachine: (functional-910650) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
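The JSON printed by image ls --format json above is a flat array of objects with id, repoDigests, repoTags and size keys (size is reported as a string of bytes). Below is a minimal, stand-alone Go sketch of decoding that output; the struct name and the pipe-from-stdin usage are assumptions for the example, not part of minikube or of this test suite.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// listedImage mirrors the keys visible in the `image ls --format json` stdout above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string of bytes
}

func main() {
	// Assumed usage (not from the report):
	//   out/minikube-linux-amd64 -p functional-910650 image ls --format json | go run main.go
	var images []listedImage
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}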

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-910650 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-910650
size: "4943877"
- id: 945f6c6fff3363198c53405e4d82074c66dc8667c3babf5b3b421caaae0b0999
repoDigests:
- localhost/minikube-local-cache-test@sha256:a14db74cbfcd6232124f43bee9b0bdd3420d8faac62e7ee50547ab42374b96fc
repoTags:
- localhost/minikube-local-cache-test:functional-910650
size: "3330"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c59e925d63f3aa135bfa9d82cb03fba9ee30edb22ebe6c9d4f43824312ba3d9b
repoDigests:
- docker.io/library/nginx@sha256:bc2f6a7c8ddbccf55bdb19659ce3b0a92ca6559e86d42677a5a02ef6bda2fcef
- docker.io/library/nginx@sha256:da837cfb72cb98fdd8efa8f2c8d3d29b89b327c07d45bf564d52835d787a0892
repoTags:
- docker.io/library/nginx:latest
size: "195872148"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-910650 image ls --format yaml --alsologtostderr:
I0205 02:14:35.543602   28829 out.go:345] Setting OutFile to fd 1 ...
I0205 02:14:35.543701   28829 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:35.543712   28829 out.go:358] Setting ErrFile to fd 2...
I0205 02:14:35.543717   28829 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:35.543922   28829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
I0205 02:14:35.544700   28829 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:35.544860   28829 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:35.545378   28829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0205 02:14:35.545424   28829 main.go:141] libmachine: Launching plugin server for driver kvm2
I0205 02:14:35.559563   28829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
I0205 02:14:35.559939   28829 main.go:141] libmachine: () Calling .GetVersion
I0205 02:14:35.560518   28829 main.go:141] libmachine: Using API Version  1
I0205 02:14:35.560546   28829 main.go:141] libmachine: () Calling .SetConfigRaw
I0205 02:14:35.560871   28829 main.go:141] libmachine: () Calling .GetMachineName
I0205 02:14:35.561095   28829 main.go:141] libmachine: (functional-910650) Calling .GetState
I0205 02:14:35.562980   28829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0205 02:14:35.563023   28829 main.go:141] libmachine: Launching plugin server for driver kvm2
I0205 02:14:35.578028   28829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
I0205 02:14:35.578376   28829 main.go:141] libmachine: () Calling .GetVersion
I0205 02:14:35.578821   28829 main.go:141] libmachine: Using API Version  1
I0205 02:14:35.578841   28829 main.go:141] libmachine: () Calling .SetConfigRaw
I0205 02:14:35.579155   28829 main.go:141] libmachine: () Calling .GetMachineName
I0205 02:14:35.579363   28829 main.go:141] libmachine: (functional-910650) Calling .DriverName
I0205 02:14:35.579559   28829 ssh_runner.go:195] Run: systemctl --version
I0205 02:14:35.579584   28829 main.go:141] libmachine: (functional-910650) Calling .GetSSHHostname
I0205 02:14:35.582241   28829 main.go:141] libmachine: (functional-910650) DBG | domain functional-910650 has defined MAC address 52:54:00:d5:92:0a in network mk-functional-910650
I0205 02:14:35.582624   28829 main.go:141] libmachine: (functional-910650) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:92:0a", ip: ""} in network mk-functional-910650: {Iface:virbr1 ExpiryTime:2025-02-05 03:12:02 +0000 UTC Type:0 Mac:52:54:00:d5:92:0a Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-910650 Clientid:01:52:54:00:d5:92:0a}
I0205 02:14:35.582660   28829 main.go:141] libmachine: (functional-910650) DBG | domain functional-910650 has defined IP address 192.168.39.25 and MAC address 52:54:00:d5:92:0a in network mk-functional-910650
I0205 02:14:35.582814   28829 main.go:141] libmachine: (functional-910650) Calling .GetSSHPort
I0205 02:14:35.582974   28829 main.go:141] libmachine: (functional-910650) Calling .GetSSHKeyPath
I0205 02:14:35.583122   28829 main.go:141] libmachine: (functional-910650) Calling .GetSSHUsername
I0205 02:14:35.583250   28829 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/functional-910650/id_rsa Username:docker}
I0205 02:14:35.667112   28829 ssh_runner.go:195] Run: sudo crictl images --output json
I0205 02:14:35.723378   28829 main.go:141] libmachine: Making call to close driver server
I0205 02:14:35.723389   28829 main.go:141] libmachine: (functional-910650) Calling .Close
I0205 02:14:35.723653   28829 main.go:141] libmachine: Successfully made call to close driver server
I0205 02:14:35.723676   28829 main.go:141] libmachine: Making call to close connection to plugin binary
I0205 02:14:35.723687   28829 main.go:141] libmachine: (functional-910650) DBG | Closing plugin on server side
I0205 02:14:35.723694   28829 main.go:141] libmachine: Making call to close driver server
I0205 02:14:35.723705   28829 main.go:141] libmachine: (functional-910650) Calling .Close
I0205 02:14:35.723951   28829 main.go:141] libmachine: Successfully made call to close driver server
I0205 02:14:35.723987   28829 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
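The yaml listing above carries the same fields as the JSON variant, rendered as a top-level sequence of mappings. For completeness, a sketch of decoding it with the third-party gopkg.in/yaml.v3 package (an assumption of this example; the report itself does not use that library).

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// yamlImage mirrors the keys in the `image ls --format yaml` stdout above.
type yamlImage struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	data, err := os.ReadFile(os.Args[1]) // pass a file captured from: image ls --format yaml
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []yamlImage
	if err := yaml.Unmarshal(data, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%d images listed\n", len(images))
}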

TestFunctional/parallel/ImageCommands/ImageBuild (7.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-910650 ssh pgrep buildkitd: exit status 1 (206.951753ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image build -t localhost/my-image:functional-910650 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-910650 image build -t localhost/my-image:functional-910650 testdata/build --alsologtostderr: (6.768700057s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-910650 image build -t localhost/my-image:functional-910650 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3a6ad0b8c5a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-910650
--> 188509cc312
Successfully tagged localhost/my-image:functional-910650
188509cc3124dce4033cbd9f20612e19caa4141cedd16ff05c822607c35986fe
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-910650 image build -t localhost/my-image:functional-910650 testdata/build --alsologtostderr:
I0205 02:14:35.947313   28918 out.go:345] Setting OutFile to fd 1 ...
I0205 02:14:35.947494   28918 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:35.947505   28918 out.go:358] Setting ErrFile to fd 2...
I0205 02:14:35.947511   28918 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:35.947731   28918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
I0205 02:14:35.948353   28918 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:35.948893   28918 config.go:182] Loaded profile config "functional-910650": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:35.949294   28918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0205 02:14:35.949366   28918 main.go:141] libmachine: Launching plugin server for driver kvm2
I0205 02:14:35.965178   28918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35047
I0205 02:14:35.965727   28918 main.go:141] libmachine: () Calling .GetVersion
I0205 02:14:35.966284   28918 main.go:141] libmachine: Using API Version  1
I0205 02:14:35.966306   28918 main.go:141] libmachine: () Calling .SetConfigRaw
I0205 02:14:35.966706   28918 main.go:141] libmachine: () Calling .GetMachineName
I0205 02:14:35.966874   28918 main.go:141] libmachine: (functional-910650) Calling .GetState
I0205 02:14:35.968661   28918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0205 02:14:35.968703   28918 main.go:141] libmachine: Launching plugin server for driver kvm2
I0205 02:14:35.983232   28918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39093
I0205 02:14:35.983676   28918 main.go:141] libmachine: () Calling .GetVersion
I0205 02:14:35.984326   28918 main.go:141] libmachine: Using API Version  1
I0205 02:14:35.984359   28918 main.go:141] libmachine: () Calling .SetConfigRaw
I0205 02:14:35.984656   28918 main.go:141] libmachine: () Calling .GetMachineName
I0205 02:14:35.984846   28918 main.go:141] libmachine: (functional-910650) Calling .DriverName
I0205 02:14:35.985047   28918 ssh_runner.go:195] Run: systemctl --version
I0205 02:14:35.985073   28918 main.go:141] libmachine: (functional-910650) Calling .GetSSHHostname
I0205 02:14:35.988117   28918 main.go:141] libmachine: (functional-910650) DBG | domain functional-910650 has defined MAC address 52:54:00:d5:92:0a in network mk-functional-910650
I0205 02:14:35.988515   28918 main.go:141] libmachine: (functional-910650) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:92:0a", ip: ""} in network mk-functional-910650: {Iface:virbr1 ExpiryTime:2025-02-05 03:12:02 +0000 UTC Type:0 Mac:52:54:00:d5:92:0a Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-910650 Clientid:01:52:54:00:d5:92:0a}
I0205 02:14:35.988546   28918 main.go:141] libmachine: (functional-910650) DBG | domain functional-910650 has defined IP address 192.168.39.25 and MAC address 52:54:00:d5:92:0a in network mk-functional-910650
I0205 02:14:35.988712   28918 main.go:141] libmachine: (functional-910650) Calling .GetSSHPort
I0205 02:14:35.988866   28918 main.go:141] libmachine: (functional-910650) Calling .GetSSHKeyPath
I0205 02:14:35.988999   28918 main.go:141] libmachine: (functional-910650) Calling .GetSSHUsername
I0205 02:14:35.989112   28918 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/functional-910650/id_rsa Username:docker}
I0205 02:14:36.105584   28918 build_images.go:161] Building image from path: /tmp/build.1131892679.tar
I0205 02:14:36.105647   28918 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0205 02:14:36.148000   28918 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1131892679.tar
I0205 02:14:36.157222   28918 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1131892679.tar: stat -c "%s %y" /var/lib/minikube/build/build.1131892679.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1131892679.tar': No such file or directory
I0205 02:14:36.157257   28918 ssh_runner.go:362] scp /tmp/build.1131892679.tar --> /var/lib/minikube/build/build.1131892679.tar (3072 bytes)
I0205 02:14:36.205446   28918 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1131892679
I0205 02:14:36.216602   28918 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1131892679 -xf /var/lib/minikube/build/build.1131892679.tar
I0205 02:14:36.253557   28918 crio.go:315] Building image: /var/lib/minikube/build/build.1131892679
I0205 02:14:36.253662   28918 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-910650 /var/lib/minikube/build/build.1131892679 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0205 02:14:42.637440   28918 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-910650 /var/lib/minikube/build/build.1131892679 --cgroup-manager=cgroupfs: (6.383746993s)
I0205 02:14:42.637514   28918 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1131892679
I0205 02:14:42.649226   28918 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1131892679.tar
I0205 02:14:42.660575   28918 build_images.go:217] Built localhost/my-image:functional-910650 from /tmp/build.1131892679.tar
I0205 02:14:42.660612   28918 build_images.go:133] succeeded building to: functional-910650
I0205 02:14:42.660619   28918 build_images.go:134] failed building to: 
I0205 02:14:42.660648   28918 main.go:141] libmachine: Making call to close driver server
I0205 02:14:42.660665   28918 main.go:141] libmachine: (functional-910650) Calling .Close
I0205 02:14:42.660928   28918 main.go:141] libmachine: Successfully made call to close driver server
I0205 02:14:42.660949   28918 main.go:141] libmachine: Making call to close connection to plugin binary
I0205 02:14:42.660954   28918 main.go:141] libmachine: (functional-910650) DBG | Closing plugin on server side
I0205 02:14:42.660959   28918 main.go:141] libmachine: Making call to close driver server
I0205 02:14:42.660970   28918 main.go:141] libmachine: (functional-910650) Calling .Close
I0205 02:14:42.661190   28918 main.go:141] libmachine: Successfully made call to close driver server
I0205 02:14:42.661201   28918 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.23s)
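The stderr above shows the steps the test drives over SSH on the node: stage the build context tar under /var/lib/minikube/build, extract it, run sudo podman build -t localhost/my-image:functional-910650 ... --cgroup-manager=cgroupfs, then remove the staging directory and tar. A rough sketch replaying the same commands with os/exec, assuming it is run on the node itself; paths, tag and flags are copied from the log, and the tar name is the random one generated for this run, so substitute your own.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Staging paths as they appear in the ImageBuild stderr above.
	dir := "/var/lib/minikube/build/build.1131892679"
	tar := dir + ".tar"

	steps := [][]string{
		{"sudo", "mkdir", "-p", dir},
		{"sudo", "tar", "-C", dir, "-xf", tar},
		{"sudo", "podman", "build", "-t", "localhost/my-image:functional-910650", dir, "--cgroup-manager=cgroupfs"},
		{"sudo", "rm", "-rf", dir},
		{"sudo", "rm", "-f", tar},
	}
	for _, s := range steps {
		cmd := exec.Command(s[0], s[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%v failed: %v\n", s, err)
			os.Exit(1)
		}
	}
}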

TestFunctional/parallel/ImageCommands/Setup (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.789936247s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-910650
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image load --daemon kicbase/echo-server:functional-910650 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-910650 image load --daemon kicbase/echo-server:functional-910650 --alsologtostderr: (1.135139731s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image load --daemon kicbase/echo-server:functional-910650 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-910650
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image load --daemon kicbase/echo-server:functional-910650 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.57s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image save kicbase/echo-server:functional-910650 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image rm kicbase/echo-server:functional-910650 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.28s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: (dbg) Done: out/minikube-linux-amd64 -p functional-910650 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.015734748s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.24s)
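ImageSaveToFile and ImageLoadFromFile together form a tar round trip: image save to a file, image rm, then image load from that file and an image ls to confirm the image is back. A condensed sketch of the same CLI sequence driven from Go; the binary path, profile and subcommands are taken from the log, while the /tmp tar path is an assumption for the example.

package main

import (
	"log"
	"os/exec"
)

// run shells out to the same minikube binary this report exercises.
func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	const profile = "functional-910650"
	const tag = "kicbase/echo-server:" + profile
	const tarball = "/tmp/echo-server-save.tar" // the test wrote to its workspace directory instead

	run("-p", profile, "image", "save", tag, tarball) // export the image to a tar
	run("-p", profile, "image", "rm", tag)            // remove it from the runtime
	run("-p", profile, "image", "load", tarball)      // reload it from the tar
	run("-p", profile, "image", "ls")                 // verify it shows up again
}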

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-910650
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-910650 image save --daemon kicbase/echo-server:functional-910650 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-910650
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-910650
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-910650
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-910650
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (196.68s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174966 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0205 02:19:09.277217   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:09.283630   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:09.294981   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:09.316379   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:09.357807   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:09.439308   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:09.600809   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:09.922461   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:10.564478   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:11.845795   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:14.407782   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:19.529317   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:29.770872   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:19:50.253053   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:20:31.214733   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-174966 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m16.032633063s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (196.68s)

TestMultiControlPlane/serial/DeployApp (5.99s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-174966 -- rollout status deployment/busybox: (3.901699968s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-f2mg7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-h6dj8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-n26sb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-f2mg7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-h6dj8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-n26sb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-f2mg7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-h6dj8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-n26sb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.99s)
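DeployApp checks DNS from every busybox replica by exec'ing nslookup for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local. A sketch of the same fan-out using the invocation shown above; the pod names are the ones this rollout produced and will differ on another run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{ // replica names from this run
		"busybox-58667487b6-f2mg7",
		"busybox-58667487b6-h6dj8",
		"busybox-58667487b6-n26sb",
	}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, name := range names {
			// Mirrors: out/minikube-linux-amd64 kubectl -p ha-174966 -- exec <pod> -- nslookup <name>
			out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "ha-174966",
				"--", "exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup of %s failed: %v\n%s\n", pod, name, err, out)
				continue
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}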

TestMultiControlPlane/serial/PingHostFromPods (1.14s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-f2mg7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-f2mg7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-h6dj8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-h6dj8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-n26sb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174966 -- exec busybox-58667487b6-n26sb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.14s)
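PingHostFromPods resolves host.minikube.internal inside each pod, extracts the address with awk 'NR==5' | cut -d' ' -f3 (line 5 of busybox's nslookup output, third space-separated field), and pings the result, which on this run is 192.168.39.1. A sketch of those two steps for a single pod; the pod name is this run's and the sh pipeline is copied verbatim from the test.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-58667487b6-f2mg7" // one busybox replica from this run

	// Step 1: resolve host.minikube.internal in the pod and extract the address
	// with the exact sh pipeline the test uses.
	pipeline := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "ha-174966",
		"--", "exec", pod, "--", "sh", "-c", pipeline).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal ->", hostIP)

	// Step 2: one ping back to the host from the same pod.
	ping, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "ha-174966",
		"--", "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).CombinedOutput()
	fmt.Println(string(ping))
	if err != nil {
		fmt.Println("ping failed:", err)
	}
}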

TestMultiControlPlane/serial/AddWorkerNode (52.6s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-174966 -v=7 --alsologtostderr
E0205 02:21:04.765295   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-174966 -v=7 --alsologtostderr: (51.775427468s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.60s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-174966 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

TestMultiControlPlane/serial/CopyFile (12.65s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 status --output json -v=7 --alsologtostderr
E0205 02:21:53.136058   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp testdata/cp-test.txt ha-174966:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3464455049/001/cp-test_ha-174966.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966:/home/docker/cp-test.txt ha-174966-m02:/home/docker/cp-test_ha-174966_ha-174966-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m02 "sudo cat /home/docker/cp-test_ha-174966_ha-174966-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966:/home/docker/cp-test.txt ha-174966-m03:/home/docker/cp-test_ha-174966_ha-174966-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m03 "sudo cat /home/docker/cp-test_ha-174966_ha-174966-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966:/home/docker/cp-test.txt ha-174966-m04:/home/docker/cp-test_ha-174966_ha-174966-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m04 "sudo cat /home/docker/cp-test_ha-174966_ha-174966-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp testdata/cp-test.txt ha-174966-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3464455049/001/cp-test_ha-174966-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m02:/home/docker/cp-test.txt ha-174966:/home/docker/cp-test_ha-174966-m02_ha-174966.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966 "sudo cat /home/docker/cp-test_ha-174966-m02_ha-174966.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m02:/home/docker/cp-test.txt ha-174966-m03:/home/docker/cp-test_ha-174966-m02_ha-174966-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m03 "sudo cat /home/docker/cp-test_ha-174966-m02_ha-174966-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m02:/home/docker/cp-test.txt ha-174966-m04:/home/docker/cp-test_ha-174966-m02_ha-174966-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m04 "sudo cat /home/docker/cp-test_ha-174966-m02_ha-174966-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp testdata/cp-test.txt ha-174966-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3464455049/001/cp-test_ha-174966-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m03:/home/docker/cp-test.txt ha-174966:/home/docker/cp-test_ha-174966-m03_ha-174966.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966 "sudo cat /home/docker/cp-test_ha-174966-m03_ha-174966.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m03:/home/docker/cp-test.txt ha-174966-m02:/home/docker/cp-test_ha-174966-m03_ha-174966-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m02 "sudo cat /home/docker/cp-test_ha-174966-m03_ha-174966-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m03:/home/docker/cp-test.txt ha-174966-m04:/home/docker/cp-test_ha-174966-m03_ha-174966-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m04 "sudo cat /home/docker/cp-test_ha-174966-m03_ha-174966-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp testdata/cp-test.txt ha-174966-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3464455049/001/cp-test_ha-174966-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m04:/home/docker/cp-test.txt ha-174966:/home/docker/cp-test_ha-174966-m04_ha-174966.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966 "sudo cat /home/docker/cp-test_ha-174966-m04_ha-174966.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m04:/home/docker/cp-test.txt ha-174966-m02:/home/docker/cp-test_ha-174966-m04_ha-174966-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m02 "sudo cat /home/docker/cp-test_ha-174966-m04_ha-174966-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 cp ha-174966-m04:/home/docker/cp-test.txt ha-174966-m03:/home/docker/cp-test_ha-174966-m04_ha-174966-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 ssh -n ha-174966-m03 "sudo cat /home/docker/cp-test_ha-174966-m04_ha-174966-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.65s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-174966 node stop m02 -v=7 --alsologtostderr: (1m30.968710778s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174966 status -v=7 --alsologtostderr: exit status 7 (618.258067ms)

                                                
                                                
-- stdout --
	ha-174966
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174966-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-174966-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174966-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:23:36.343424   34725 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:23:36.343548   34725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:23:36.343556   34725 out.go:358] Setting ErrFile to fd 2...
	I0205 02:23:36.343561   34725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:23:36.343744   34725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 02:23:36.343900   34725 out.go:352] Setting JSON to false
	I0205 02:23:36.343922   34725 mustload.go:65] Loading cluster: ha-174966
	I0205 02:23:36.344062   34725 notify.go:220] Checking for updates...
	I0205 02:23:36.344337   34725 config.go:182] Loaded profile config "ha-174966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:23:36.344354   34725 status.go:174] checking status of ha-174966 ...
	I0205 02:23:36.344766   34725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:23:36.344804   34725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:23:36.359624   34725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0205 02:23:36.360005   34725 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:23:36.360598   34725 main.go:141] libmachine: Using API Version  1
	I0205 02:23:36.360625   34725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:23:36.360907   34725 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:23:36.361098   34725 main.go:141] libmachine: (ha-174966) Calling .GetState
	I0205 02:23:36.362829   34725 status.go:371] ha-174966 host status = "Running" (err=<nil>)
	I0205 02:23:36.362846   34725 host.go:66] Checking if "ha-174966" exists ...
	I0205 02:23:36.363138   34725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:23:36.363171   34725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:23:36.378578   34725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36195
	I0205 02:23:36.378947   34725 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:23:36.379363   34725 main.go:141] libmachine: Using API Version  1
	I0205 02:23:36.379380   34725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:23:36.379663   34725 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:23:36.379839   34725 main.go:141] libmachine: (ha-174966) Calling .GetIP
	I0205 02:23:36.382619   34725 main.go:141] libmachine: (ha-174966) DBG | domain ha-174966 has defined MAC address 52:54:00:49:65:c6 in network mk-ha-174966
	I0205 02:23:36.383025   34725 main.go:141] libmachine: (ha-174966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:65:c6", ip: ""} in network mk-ha-174966: {Iface:virbr1 ExpiryTime:2025-02-05 03:17:49 +0000 UTC Type:0 Mac:52:54:00:49:65:c6 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-174966 Clientid:01:52:54:00:49:65:c6}
	I0205 02:23:36.383056   34725 main.go:141] libmachine: (ha-174966) DBG | domain ha-174966 has defined IP address 192.168.39.186 and MAC address 52:54:00:49:65:c6 in network mk-ha-174966
	I0205 02:23:36.383162   34725 host.go:66] Checking if "ha-174966" exists ...
	I0205 02:23:36.383486   34725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:23:36.383530   34725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:23:36.398868   34725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0205 02:23:36.399192   34725 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:23:36.399688   34725 main.go:141] libmachine: Using API Version  1
	I0205 02:23:36.399710   34725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:23:36.400017   34725 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:23:36.400218   34725 main.go:141] libmachine: (ha-174966) Calling .DriverName
	I0205 02:23:36.400387   34725 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:23:36.400414   34725 main.go:141] libmachine: (ha-174966) Calling .GetSSHHostname
	I0205 02:23:36.403171   34725 main.go:141] libmachine: (ha-174966) DBG | domain ha-174966 has defined MAC address 52:54:00:49:65:c6 in network mk-ha-174966
	I0205 02:23:36.403664   34725 main.go:141] libmachine: (ha-174966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:65:c6", ip: ""} in network mk-ha-174966: {Iface:virbr1 ExpiryTime:2025-02-05 03:17:49 +0000 UTC Type:0 Mac:52:54:00:49:65:c6 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-174966 Clientid:01:52:54:00:49:65:c6}
	I0205 02:23:36.403692   34725 main.go:141] libmachine: (ha-174966) DBG | domain ha-174966 has defined IP address 192.168.39.186 and MAC address 52:54:00:49:65:c6 in network mk-ha-174966
	I0205 02:23:36.403821   34725 main.go:141] libmachine: (ha-174966) Calling .GetSSHPort
	I0205 02:23:36.403983   34725 main.go:141] libmachine: (ha-174966) Calling .GetSSHKeyPath
	I0205 02:23:36.404155   34725 main.go:141] libmachine: (ha-174966) Calling .GetSSHUsername
	I0205 02:23:36.404287   34725 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/ha-174966/id_rsa Username:docker}
	I0205 02:23:36.486034   34725 ssh_runner.go:195] Run: systemctl --version
	I0205 02:23:36.492826   34725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:23:36.509666   34725 kubeconfig.go:125] found "ha-174966" server: "https://192.168.39.254:8443"
	I0205 02:23:36.509705   34725 api_server.go:166] Checking apiserver status ...
	I0205 02:23:36.509737   34725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 02:23:36.526123   34725 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1132/cgroup
	W0205 02:23:36.536376   34725 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1132/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0205 02:23:36.536432   34725 ssh_runner.go:195] Run: ls
	I0205 02:23:36.541027   34725 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0205 02:23:36.545757   34725 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0205 02:23:36.545778   34725 status.go:463] ha-174966 apiserver status = Running (err=<nil>)
	I0205 02:23:36.545821   34725 status.go:176] ha-174966 status: &{Name:ha-174966 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:23:36.545841   34725 status.go:174] checking status of ha-174966-m02 ...
	I0205 02:23:36.546124   34725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:23:36.546159   34725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:23:36.561488   34725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34673
	I0205 02:23:36.561952   34725 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:23:36.562528   34725 main.go:141] libmachine: Using API Version  1
	I0205 02:23:36.562556   34725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:23:36.563006   34725 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:23:36.563241   34725 main.go:141] libmachine: (ha-174966-m02) Calling .GetState
	I0205 02:23:36.564861   34725 status.go:371] ha-174966-m02 host status = "Stopped" (err=<nil>)
	I0205 02:23:36.564877   34725 status.go:384] host is not running, skipping remaining checks
	I0205 02:23:36.564884   34725 status.go:176] ha-174966-m02 status: &{Name:ha-174966-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:23:36.564915   34725 status.go:174] checking status of ha-174966-m03 ...
	I0205 02:23:36.565228   34725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:23:36.565283   34725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:23:36.580165   34725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33995
	I0205 02:23:36.580529   34725 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:23:36.580994   34725 main.go:141] libmachine: Using API Version  1
	I0205 02:23:36.581015   34725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:23:36.581329   34725 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:23:36.581529   34725 main.go:141] libmachine: (ha-174966-m03) Calling .GetState
	I0205 02:23:36.582895   34725 status.go:371] ha-174966-m03 host status = "Running" (err=<nil>)
	I0205 02:23:36.582909   34725 host.go:66] Checking if "ha-174966-m03" exists ...
	I0205 02:23:36.583208   34725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:23:36.583242   34725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:23:36.598683   34725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I0205 02:23:36.599091   34725 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:23:36.599579   34725 main.go:141] libmachine: Using API Version  1
	I0205 02:23:36.599599   34725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:23:36.599917   34725 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:23:36.600112   34725 main.go:141] libmachine: (ha-174966-m03) Calling .GetIP
	I0205 02:23:36.602766   34725 main.go:141] libmachine: (ha-174966-m03) DBG | domain ha-174966-m03 has defined MAC address 52:54:00:d9:78:7d in network mk-ha-174966
	I0205 02:23:36.603239   34725 main.go:141] libmachine: (ha-174966-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:7d", ip: ""} in network mk-ha-174966: {Iface:virbr1 ExpiryTime:2025-02-05 03:19:48 +0000 UTC Type:0 Mac:52:54:00:d9:78:7d Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-174966-m03 Clientid:01:52:54:00:d9:78:7d}
	I0205 02:23:36.603267   34725 main.go:141] libmachine: (ha-174966-m03) DBG | domain ha-174966-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:d9:78:7d in network mk-ha-174966
	I0205 02:23:36.603413   34725 host.go:66] Checking if "ha-174966-m03" exists ...
	I0205 02:23:36.603761   34725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:23:36.603815   34725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:23:36.618971   34725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0205 02:23:36.619365   34725 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:23:36.619800   34725 main.go:141] libmachine: Using API Version  1
	I0205 02:23:36.619823   34725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:23:36.620109   34725 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:23:36.620306   34725 main.go:141] libmachine: (ha-174966-m03) Calling .DriverName
	I0205 02:23:36.620480   34725 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:23:36.620502   34725 main.go:141] libmachine: (ha-174966-m03) Calling .GetSSHHostname
	I0205 02:23:36.623198   34725 main.go:141] libmachine: (ha-174966-m03) DBG | domain ha-174966-m03 has defined MAC address 52:54:00:d9:78:7d in network mk-ha-174966
	I0205 02:23:36.623626   34725 main.go:141] libmachine: (ha-174966-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:78:7d", ip: ""} in network mk-ha-174966: {Iface:virbr1 ExpiryTime:2025-02-05 03:19:48 +0000 UTC Type:0 Mac:52:54:00:d9:78:7d Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-174966-m03 Clientid:01:52:54:00:d9:78:7d}
	I0205 02:23:36.623661   34725 main.go:141] libmachine: (ha-174966-m03) DBG | domain ha-174966-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:d9:78:7d in network mk-ha-174966
	I0205 02:23:36.623827   34725 main.go:141] libmachine: (ha-174966-m03) Calling .GetSSHPort
	I0205 02:23:36.624029   34725 main.go:141] libmachine: (ha-174966-m03) Calling .GetSSHKeyPath
	I0205 02:23:36.624167   34725 main.go:141] libmachine: (ha-174966-m03) Calling .GetSSHUsername
	I0205 02:23:36.624319   34725 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/ha-174966-m03/id_rsa Username:docker}
	I0205 02:23:36.705752   34725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:23:36.722612   34725 kubeconfig.go:125] found "ha-174966" server: "https://192.168.39.254:8443"
	I0205 02:23:36.722640   34725 api_server.go:166] Checking apiserver status ...
	I0205 02:23:36.722678   34725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 02:23:36.737260   34725 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0205 02:23:36.746508   34725 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0205 02:23:36.746587   34725 ssh_runner.go:195] Run: ls
	I0205 02:23:36.751026   34725 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0205 02:23:36.755912   34725 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0205 02:23:36.755937   34725 status.go:463] ha-174966-m03 apiserver status = Running (err=<nil>)
	I0205 02:23:36.755944   34725 status.go:176] ha-174966-m03 status: &{Name:ha-174966-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:23:36.755959   34725 status.go:174] checking status of ha-174966-m04 ...
	I0205 02:23:36.756364   34725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:23:36.756411   34725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:23:36.771453   34725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36723
	I0205 02:23:36.771873   34725 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:23:36.772302   34725 main.go:141] libmachine: Using API Version  1
	I0205 02:23:36.772323   34725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:23:36.772622   34725 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:23:36.772835   34725 main.go:141] libmachine: (ha-174966-m04) Calling .GetState
	I0205 02:23:36.774369   34725 status.go:371] ha-174966-m04 host status = "Running" (err=<nil>)
	I0205 02:23:36.774387   34725 host.go:66] Checking if "ha-174966-m04" exists ...
	I0205 02:23:36.774706   34725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:23:36.774758   34725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:23:36.789988   34725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46371
	I0205 02:23:36.790375   34725 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:23:36.790830   34725 main.go:141] libmachine: Using API Version  1
	I0205 02:23:36.790851   34725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:23:36.791145   34725 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:23:36.791315   34725 main.go:141] libmachine: (ha-174966-m04) Calling .GetIP
	I0205 02:23:36.794096   34725 main.go:141] libmachine: (ha-174966-m04) DBG | domain ha-174966-m04 has defined MAC address 52:54:00:1f:e9:0f in network mk-ha-174966
	I0205 02:23:36.794488   34725 main.go:141] libmachine: (ha-174966-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e9:0f", ip: ""} in network mk-ha-174966: {Iface:virbr1 ExpiryTime:2025-02-05 03:21:14 +0000 UTC Type:0 Mac:52:54:00:1f:e9:0f Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-174966-m04 Clientid:01:52:54:00:1f:e9:0f}
	I0205 02:23:36.794511   34725 main.go:141] libmachine: (ha-174966-m04) DBG | domain ha-174966-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:1f:e9:0f in network mk-ha-174966
	I0205 02:23:36.794631   34725 host.go:66] Checking if "ha-174966-m04" exists ...
	I0205 02:23:36.794926   34725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:23:36.794960   34725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:23:36.810180   34725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40427
	I0205 02:23:36.810553   34725 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:23:36.811016   34725 main.go:141] libmachine: Using API Version  1
	I0205 02:23:36.811036   34725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:23:36.811378   34725 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:23:36.811551   34725 main.go:141] libmachine: (ha-174966-m04) Calling .DriverName
	I0205 02:23:36.811726   34725 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:23:36.811745   34725 main.go:141] libmachine: (ha-174966-m04) Calling .GetSSHHostname
	I0205 02:23:36.814640   34725 main.go:141] libmachine: (ha-174966-m04) DBG | domain ha-174966-m04 has defined MAC address 52:54:00:1f:e9:0f in network mk-ha-174966
	I0205 02:23:36.815113   34725 main.go:141] libmachine: (ha-174966-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e9:0f", ip: ""} in network mk-ha-174966: {Iface:virbr1 ExpiryTime:2025-02-05 03:21:14 +0000 UTC Type:0 Mac:52:54:00:1f:e9:0f Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-174966-m04 Clientid:01:52:54:00:1f:e9:0f}
	I0205 02:23:36.815146   34725 main.go:141] libmachine: (ha-174966-m04) DBG | domain ha-174966-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:1f:e9:0f in network mk-ha-174966
	I0205 02:23:36.815356   34725 main.go:141] libmachine: (ha-174966-m04) Calling .GetSSHPort
	I0205 02:23:36.815539   34725 main.go:141] libmachine: (ha-174966-m04) Calling .GetSSHKeyPath
	I0205 02:23:36.815673   34725 main.go:141] libmachine: (ha-174966-m04) Calling .GetSSHUsername
	I0205 02:23:36.815782   34725 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/ha-174966-m04/id_rsa Username:docker}
	I0205 02:23:36.897795   34725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:23:36.914623   34725 status.go:176] ha-174966-m04 status: &{Name:ha-174966-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.59s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (53.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 node start m02 -v=7 --alsologtostderr
E0205 02:24:09.273613   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-174966 node start m02 -v=7 --alsologtostderr: (52.593438572s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (53.52s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (427.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-174966 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-174966 -v=7 --alsologtostderr
E0205 02:24:36.978228   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:04.765259   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:27:27.836728   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-174966 -v=7 --alsologtostderr: (4m33.987888944s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174966 --wait=true -v=7 --alsologtostderr
E0205 02:29:09.273799   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:31:04.765184   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-174966 --wait=true -v=7 --alsologtostderr: (2m33.326807293s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-174966
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (427.42s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-174966 node delete m03 -v=7 --alsologtostderr: (17.164198879s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.89s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 stop -v=7 --alsologtostderr
E0205 02:34:09.273528   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:35:32.339834   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:36:04.764597   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-174966 stop -v=7 --alsologtostderr: (4m32.373068628s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174966 status -v=7 --alsologtostderr: exit status 7 (99.632229ms)

                                                
                                                
-- stdout --
	ha-174966
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-174966-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-174966-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:36:30.247158   38893 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:36:30.247287   38893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:36:30.247300   38893 out.go:358] Setting ErrFile to fd 2...
	I0205 02:36:30.247308   38893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:36:30.247502   38893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 02:36:30.247690   38893 out.go:352] Setting JSON to false
	I0205 02:36:30.247721   38893 mustload.go:65] Loading cluster: ha-174966
	I0205 02:36:30.247826   38893 notify.go:220] Checking for updates...
	I0205 02:36:30.248130   38893 config.go:182] Loaded profile config "ha-174966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:36:30.248149   38893 status.go:174] checking status of ha-174966 ...
	I0205 02:36:30.248570   38893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:36:30.248616   38893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:36:30.263104   38893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40227
	I0205 02:36:30.263544   38893 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:36:30.264152   38893 main.go:141] libmachine: Using API Version  1
	I0205 02:36:30.264180   38893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:36:30.264468   38893 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:36:30.264702   38893 main.go:141] libmachine: (ha-174966) Calling .GetState
	I0205 02:36:30.266479   38893 status.go:371] ha-174966 host status = "Stopped" (err=<nil>)
	I0205 02:36:30.266490   38893 status.go:384] host is not running, skipping remaining checks
	I0205 02:36:30.266495   38893 status.go:176] ha-174966 status: &{Name:ha-174966 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:36:30.266526   38893 status.go:174] checking status of ha-174966-m02 ...
	I0205 02:36:30.266785   38893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:36:30.266838   38893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:36:30.281222   38893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
	I0205 02:36:30.281595   38893 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:36:30.282010   38893 main.go:141] libmachine: Using API Version  1
	I0205 02:36:30.282036   38893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:36:30.282365   38893 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:36:30.282560   38893 main.go:141] libmachine: (ha-174966-m02) Calling .GetState
	I0205 02:36:30.284035   38893 status.go:371] ha-174966-m02 host status = "Stopped" (err=<nil>)
	I0205 02:36:30.284048   38893 status.go:384] host is not running, skipping remaining checks
	I0205 02:36:30.284053   38893 status.go:176] ha-174966-m02 status: &{Name:ha-174966-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:36:30.284068   38893 status.go:174] checking status of ha-174966-m04 ...
	I0205 02:36:30.284339   38893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:36:30.284378   38893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:36:30.299521   38893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0205 02:36:30.299894   38893 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:36:30.300298   38893 main.go:141] libmachine: Using API Version  1
	I0205 02:36:30.300318   38893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:36:30.300610   38893 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:36:30.300801   38893 main.go:141] libmachine: (ha-174966-m04) Calling .GetState
	I0205 02:36:30.302229   38893 status.go:371] ha-174966-m04 host status = "Stopped" (err=<nil>)
	I0205 02:36:30.302245   38893 status.go:384] host is not running, skipping remaining checks
	I0205 02:36:30.302252   38893 status.go:176] ha-174966-m04 status: &{Name:ha-174966-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.47s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (121.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174966 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-174966 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m0.363200726s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (121.09s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (74.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-174966 --control-plane -v=7 --alsologtostderr
E0205 02:39:09.273694   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-174966 --control-plane -v=7 --alsologtostderr: (1m13.412874363s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-174966 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.26s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
TestJSONOutput/start/Command (88.3s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-575454 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0205 02:41:04.769428   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-575454 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.297651391s)
--- PASS: TestJSONOutput/start/Command (88.30s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-575454 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-575454 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-575454 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-575454 --output=json --user=testUser: (7.357178682s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-566256 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-566256 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.612315ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"092c9f6e-b7d0-4851-a014-5fb289920ba1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-566256] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"56978ef1-0f79-40a1-9226-537494480d0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20363"}}
	{"specversion":"1.0","id":"4ae29c04-3c4e-46e7-8021-29d55c826ef5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"119fa461-46d7-4715-8d1a-b43fcb6db5c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig"}}
	{"specversion":"1.0","id":"f5fc4f6a-0715-42e5-84ca-8cd741d12dfb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube"}}
	{"specversion":"1.0","id":"68280ae7-e3a4-4e4b-9f72-7ecff0d9957b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"358265bd-8df3-4251-afd1-d5b0dc23645f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4b3ff965-46ee-47f5-a8e2-ed69e1723a0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-566256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-566256
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (83.62s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-004965 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-004965 --driver=kvm2  --container-runtime=crio: (41.761223381s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-016187 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-016187 --driver=kvm2  --container-runtime=crio: (39.241578223s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-004965
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-016187
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-016187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-016187
helpers_test.go:175: Cleaning up "first-004965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-004965
--- PASS: TestMinikubeProfile (83.62s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.37s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-322794 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-322794 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.373724223s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.37s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-322794 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-322794 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.01s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-337434 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-337434 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.006657529s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.01s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-337434 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-337434 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-322794 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-337434 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-337434 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-337434
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-337434: (1.265466629s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.13s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-337434
E0205 02:44:07.838674   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:44:09.273466   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-337434: (22.130983103s)
--- PASS: TestMountStart/serial/RestartStopped (23.13s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-337434 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-337434 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-794103 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0205 02:46:04.764887   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-794103 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.189155926s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.60s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-794103 -- rollout status deployment/busybox: (3.554138678s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-f5dnw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-tvq95 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-f5dnw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-tvq95 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-f5dnw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-tvq95 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.02s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-f5dnw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-f5dnw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-tvq95 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-tvq95 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)
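Note: the connectivity check above resolves host.minikube.internal inside each busybox pod and pings the address it resolved to in this run (192.168.39.1). A manual equivalent, using one of the generated pod names from this run (pod names change per deployment, so substitute your own):

    # resolve the host entry minikube injects, then ping it from inside the pod
    out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-f5dnw -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p multinode-794103 -- exec busybox-58667487b6-f5dnw -- \
      sh -c "ping -c 1 192.168.39.1"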

                                                
                                    
TestMultiNode/serial/AddNode (51.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-794103 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-794103 -v 3 --alsologtostderr: (50.739416387s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.28s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-794103 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp testdata/cp-test.txt multinode-794103:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp multinode-794103:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile47059525/001/cp-test_multinode-794103.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp multinode-794103:/home/docker/cp-test.txt multinode-794103-m02:/home/docker/cp-test_multinode-794103_multinode-794103-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m02 "sudo cat /home/docker/cp-test_multinode-794103_multinode-794103-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp multinode-794103:/home/docker/cp-test.txt multinode-794103-m03:/home/docker/cp-test_multinode-794103_multinode-794103-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m03 "sudo cat /home/docker/cp-test_multinode-794103_multinode-794103-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp testdata/cp-test.txt multinode-794103-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp multinode-794103-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile47059525/001/cp-test_multinode-794103-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp multinode-794103-m02:/home/docker/cp-test.txt multinode-794103:/home/docker/cp-test_multinode-794103-m02_multinode-794103.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103 "sudo cat /home/docker/cp-test_multinode-794103-m02_multinode-794103.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp multinode-794103-m02:/home/docker/cp-test.txt multinode-794103-m03:/home/docker/cp-test_multinode-794103-m02_multinode-794103-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m03 "sudo cat /home/docker/cp-test_multinode-794103-m02_multinode-794103-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp testdata/cp-test.txt multinode-794103-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp multinode-794103-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile47059525/001/cp-test_multinode-794103-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp multinode-794103-m03:/home/docker/cp-test.txt multinode-794103:/home/docker/cp-test_multinode-794103-m03_multinode-794103.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103 "sudo cat /home/docker/cp-test_multinode-794103-m03_multinode-794103.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 cp multinode-794103-m03:/home/docker/cp-test.txt multinode-794103-m02:/home/docker/cp-test_multinode-794103-m03_multinode-794103-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m02 "sudo cat /home/docker/cp-test_multinode-794103-m03_multinode-794103-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.98s)
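Note: each cp in the sequence above is verified by reading the file back over ssh on both ends of the copy (the /tmp/TestMultiNodeserialCopyFile47059525/001 directory is the per-run temp dir created by the harness). The basic round trip, taken directly from the commands logged here:

    # host -> node, then node -> node, each followed by a read-back check
    out/minikube-linux-amd64 -p multinode-794103 cp testdata/cp-test.txt multinode-794103:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p multinode-794103 cp multinode-794103:/home/docker/cp-test.txt \
      multinode-794103-m02:/home/docker/cp-test_multinode-794103_multinode-794103-m02.txt
    out/minikube-linux-amd64 -p multinode-794103 ssh -n multinode-794103-m02 \
      "sudo cat /home/docker/cp-test_multinode-794103_multinode-794103-m02.txt"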

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-794103 node stop m03: (1.457781638s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-794103 status: exit status 7 (414.3965ms)

                                                
                                                
-- stdout --
	multinode-794103
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-794103-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-794103-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-794103 status --alsologtostderr: exit status 7 (423.187323ms)

                                                
                                                
-- stdout --
	multinode-794103
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-794103-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-794103-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:47:16.604404   46713 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:47:16.604521   46713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:47:16.604529   46713 out.go:358] Setting ErrFile to fd 2...
	I0205 02:47:16.604533   46713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:47:16.604702   46713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 02:47:16.604860   46713 out.go:352] Setting JSON to false
	I0205 02:47:16.604885   46713 mustload.go:65] Loading cluster: multinode-794103
	I0205 02:47:16.604941   46713 notify.go:220] Checking for updates...
	I0205 02:47:16.605289   46713 config.go:182] Loaded profile config "multinode-794103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:47:16.605305   46713 status.go:174] checking status of multinode-794103 ...
	I0205 02:47:16.605864   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:47:16.605912   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:47:16.622785   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42533
	I0205 02:47:16.623262   46713 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:47:16.623877   46713 main.go:141] libmachine: Using API Version  1
	I0205 02:47:16.623906   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:47:16.624314   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:47:16.624544   46713 main.go:141] libmachine: (multinode-794103) Calling .GetState
	I0205 02:47:16.626356   46713 status.go:371] multinode-794103 host status = "Running" (err=<nil>)
	I0205 02:47:16.626374   46713 host.go:66] Checking if "multinode-794103" exists ...
	I0205 02:47:16.626802   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:47:16.626849   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:47:16.643705   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40855
	I0205 02:47:16.644085   46713 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:47:16.644520   46713 main.go:141] libmachine: Using API Version  1
	I0205 02:47:16.644539   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:47:16.644882   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:47:16.645059   46713 main.go:141] libmachine: (multinode-794103) Calling .GetIP
	I0205 02:47:16.648237   46713 main.go:141] libmachine: (multinode-794103) DBG | domain multinode-794103 has defined MAC address 52:54:00:53:e2:25 in network mk-multinode-794103
	I0205 02:47:16.648644   46713 main.go:141] libmachine: (multinode-794103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:25", ip: ""} in network mk-multinode-794103: {Iface:virbr1 ExpiryTime:2025-02-05 03:44:29 +0000 UTC Type:0 Mac:52:54:00:53:e2:25 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-794103 Clientid:01:52:54:00:53:e2:25}
	I0205 02:47:16.648681   46713 main.go:141] libmachine: (multinode-794103) DBG | domain multinode-794103 has defined IP address 192.168.39.198 and MAC address 52:54:00:53:e2:25 in network mk-multinode-794103
	I0205 02:47:16.648778   46713 host.go:66] Checking if "multinode-794103" exists ...
	I0205 02:47:16.649103   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:47:16.649150   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:47:16.664029   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37757
	I0205 02:47:16.664502   46713 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:47:16.665055   46713 main.go:141] libmachine: Using API Version  1
	I0205 02:47:16.665077   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:47:16.665445   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:47:16.665636   46713 main.go:141] libmachine: (multinode-794103) Calling .DriverName
	I0205 02:47:16.665810   46713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:47:16.665837   46713 main.go:141] libmachine: (multinode-794103) Calling .GetSSHHostname
	I0205 02:47:16.668730   46713 main.go:141] libmachine: (multinode-794103) DBG | domain multinode-794103 has defined MAC address 52:54:00:53:e2:25 in network mk-multinode-794103
	I0205 02:47:16.669148   46713 main.go:141] libmachine: (multinode-794103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:25", ip: ""} in network mk-multinode-794103: {Iface:virbr1 ExpiryTime:2025-02-05 03:44:29 +0000 UTC Type:0 Mac:52:54:00:53:e2:25 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-794103 Clientid:01:52:54:00:53:e2:25}
	I0205 02:47:16.669180   46713 main.go:141] libmachine: (multinode-794103) DBG | domain multinode-794103 has defined IP address 192.168.39.198 and MAC address 52:54:00:53:e2:25 in network mk-multinode-794103
	I0205 02:47:16.669322   46713 main.go:141] libmachine: (multinode-794103) Calling .GetSSHPort
	I0205 02:47:16.669490   46713 main.go:141] libmachine: (multinode-794103) Calling .GetSSHKeyPath
	I0205 02:47:16.669633   46713 main.go:141] libmachine: (multinode-794103) Calling .GetSSHUsername
	I0205 02:47:16.669775   46713 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/multinode-794103/id_rsa Username:docker}
	I0205 02:47:16.748655   46713 ssh_runner.go:195] Run: systemctl --version
	I0205 02:47:16.755800   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:47:16.772546   46713 kubeconfig.go:125] found "multinode-794103" server: "https://192.168.39.198:8443"
	I0205 02:47:16.772599   46713 api_server.go:166] Checking apiserver status ...
	I0205 02:47:16.772644   46713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 02:47:16.788007   46713 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0205 02:47:16.801005   46713 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0205 02:47:16.801071   46713 ssh_runner.go:195] Run: ls
	I0205 02:47:16.805513   46713 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
	I0205 02:47:16.810133   46713 api_server.go:279] https://192.168.39.198:8443/healthz returned 200:
	ok
	I0205 02:47:16.810157   46713 status.go:463] multinode-794103 apiserver status = Running (err=<nil>)
	I0205 02:47:16.810167   46713 status.go:176] multinode-794103 status: &{Name:multinode-794103 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:47:16.810193   46713 status.go:174] checking status of multinode-794103-m02 ...
	I0205 02:47:16.810547   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:47:16.810592   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:47:16.826412   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43431
	I0205 02:47:16.826767   46713 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:47:16.827262   46713 main.go:141] libmachine: Using API Version  1
	I0205 02:47:16.827279   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:47:16.827567   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:47:16.827753   46713 main.go:141] libmachine: (multinode-794103-m02) Calling .GetState
	I0205 02:47:16.829266   46713 status.go:371] multinode-794103-m02 host status = "Running" (err=<nil>)
	I0205 02:47:16.829278   46713 host.go:66] Checking if "multinode-794103-m02" exists ...
	I0205 02:47:16.829650   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:47:16.829694   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:47:16.844898   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0205 02:47:16.845428   46713 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:47:16.845938   46713 main.go:141] libmachine: Using API Version  1
	I0205 02:47:16.845959   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:47:16.846286   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:47:16.846492   46713 main.go:141] libmachine: (multinode-794103-m02) Calling .GetIP
	I0205 02:47:16.848787   46713 main.go:141] libmachine: (multinode-794103-m02) DBG | domain multinode-794103-m02 has defined MAC address 52:54:00:77:5f:8d in network mk-multinode-794103
	I0205 02:47:16.849110   46713 main.go:141] libmachine: (multinode-794103-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:5f:8d", ip: ""} in network mk-multinode-794103: {Iface:virbr1 ExpiryTime:2025-02-05 03:45:35 +0000 UTC Type:0 Mac:52:54:00:77:5f:8d Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:multinode-794103-m02 Clientid:01:52:54:00:77:5f:8d}
	I0205 02:47:16.849141   46713 main.go:141] libmachine: (multinode-794103-m02) DBG | domain multinode-794103-m02 has defined IP address 192.168.39.33 and MAC address 52:54:00:77:5f:8d in network mk-multinode-794103
	I0205 02:47:16.849303   46713 host.go:66] Checking if "multinode-794103-m02" exists ...
	I0205 02:47:16.849628   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:47:16.849665   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:47:16.864560   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43217
	I0205 02:47:16.864940   46713 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:47:16.865393   46713 main.go:141] libmachine: Using API Version  1
	I0205 02:47:16.865425   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:47:16.865715   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:47:16.865896   46713 main.go:141] libmachine: (multinode-794103-m02) Calling .DriverName
	I0205 02:47:16.866063   46713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:47:16.866080   46713 main.go:141] libmachine: (multinode-794103-m02) Calling .GetSSHHostname
	I0205 02:47:16.868558   46713 main.go:141] libmachine: (multinode-794103-m02) DBG | domain multinode-794103-m02 has defined MAC address 52:54:00:77:5f:8d in network mk-multinode-794103
	I0205 02:47:16.868948   46713 main.go:141] libmachine: (multinode-794103-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:5f:8d", ip: ""} in network mk-multinode-794103: {Iface:virbr1 ExpiryTime:2025-02-05 03:45:35 +0000 UTC Type:0 Mac:52:54:00:77:5f:8d Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:multinode-794103-m02 Clientid:01:52:54:00:77:5f:8d}
	I0205 02:47:16.868987   46713 main.go:141] libmachine: (multinode-794103-m02) DBG | domain multinode-794103-m02 has defined IP address 192.168.39.33 and MAC address 52:54:00:77:5f:8d in network mk-multinode-794103
	I0205 02:47:16.869111   46713 main.go:141] libmachine: (multinode-794103-m02) Calling .GetSSHPort
	I0205 02:47:16.869279   46713 main.go:141] libmachine: (multinode-794103-m02) Calling .GetSSHKeyPath
	I0205 02:47:16.869401   46713 main.go:141] libmachine: (multinode-794103-m02) Calling .GetSSHUsername
	I0205 02:47:16.869513   46713 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20363-12788/.minikube/machines/multinode-794103-m02/id_rsa Username:docker}
	I0205 02:47:16.948081   46713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:47:16.961549   46713 status.go:176] multinode-794103-m02 status: &{Name:multinode-794103-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:47:16.961588   46713 status.go:174] checking status of multinode-794103-m03 ...
	I0205 02:47:16.961972   46713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:47:16.962030   46713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:47:16.976973   46713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0205 02:47:16.977466   46713 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:47:16.977964   46713 main.go:141] libmachine: Using API Version  1
	I0205 02:47:16.977997   46713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:47:16.978310   46713 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:47:16.978481   46713 main.go:141] libmachine: (multinode-794103-m03) Calling .GetState
	I0205 02:47:16.979984   46713 status.go:371] multinode-794103-m03 host status = "Stopped" (err=<nil>)
	I0205 02:47:16.980000   46713 status.go:384] host is not running, skipping remaining checks
	I0205 02:47:16.980007   46713 status.go:176] multinode-794103-m03 status: &{Name:multinode-794103-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
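Note: stopping a single worker leaves the control plane and the other worker running, so the subsequent status calls return a non-zero exit (the subtest still passes because it expects exit status 7 in this state rather than treating it as a failure). The same state can be inspected manually with the commands shown above:

    out/minikube-linux-amd64 -p multinode-794103 node stop m03
    # non-zero exit is expected here while m03 is stopped; the other nodes still report Running
    out/minikube-linux-amd64 -p multinode-794103 status --alsologtostderr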

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-794103 node start m03 -v=7 --alsologtostderr: (37.239145469s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.85s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (337.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-794103
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-794103
E0205 02:49:09.276859   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-794103: (3m3.031609023s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-794103 --wait=true -v=8 --alsologtostderr
E0205 02:51:04.764365   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:52:12.341204   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-794103 --wait=true -v=8 --alsologtostderr: (2m34.850145885s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-794103
--- PASS: TestMultiNode/serial/RestartKeepsNodes (337.98s)
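Note: RestartKeepsNodes stops the whole cluster, starts it again with --wait=true, then lists the nodes again to confirm the node set survives the restart. The manual equivalent, using the commands from this run:

    out/minikube-linux-amd64 node list -p multinode-794103          # record the node set
    out/minikube-linux-amd64 stop -p multinode-794103
    out/minikube-linux-amd64 start -p multinode-794103 --wait=true -v=8 --alsologtostderr
    out/minikube-linux-amd64 node list -p multinode-794103          # the same nodes should be listed again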

                                                
                                    
TestMultiNode/serial/DeleteNode (2.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-794103 node delete m03: (2.177969271s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.69s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 stop
E0205 02:54:09.274372   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:56:04.764560   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-794103 stop: (3m1.843485776s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-794103 status: exit status 7 (81.882103ms)

                                                
                                                
-- stdout --
	multinode-794103
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-794103-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-794103 status --alsologtostderr: exit status 7 (82.336499ms)

                                                
                                                
-- stdout --
	multinode-794103
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-794103-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:56:37.456530   49701 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:56:37.456961   49701 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:56:37.456973   49701 out.go:358] Setting ErrFile to fd 2...
	I0205 02:56:37.456980   49701 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:56:37.457460   49701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 02:56:37.457826   49701 out.go:352] Setting JSON to false
	I0205 02:56:37.457900   49701 mustload.go:65] Loading cluster: multinode-794103
	I0205 02:56:37.458068   49701 notify.go:220] Checking for updates...
	I0205 02:56:37.458774   49701 config.go:182] Loaded profile config "multinode-794103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:56:37.458806   49701 status.go:174] checking status of multinode-794103 ...
	I0205 02:56:37.459301   49701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:56:37.459352   49701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:56:37.474687   49701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35493
	I0205 02:56:37.475097   49701 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:56:37.475724   49701 main.go:141] libmachine: Using API Version  1
	I0205 02:56:37.475745   49701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:56:37.476101   49701 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:56:37.476341   49701 main.go:141] libmachine: (multinode-794103) Calling .GetState
	I0205 02:56:37.477882   49701 status.go:371] multinode-794103 host status = "Stopped" (err=<nil>)
	I0205 02:56:37.477896   49701 status.go:384] host is not running, skipping remaining checks
	I0205 02:56:37.477903   49701 status.go:176] multinode-794103 status: &{Name:multinode-794103 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:56:37.477952   49701 status.go:174] checking status of multinode-794103-m02 ...
	I0205 02:56:37.478231   49701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0205 02:56:37.478283   49701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0205 02:56:37.492398   49701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35537
	I0205 02:56:37.492797   49701 main.go:141] libmachine: () Calling .GetVersion
	I0205 02:56:37.493202   49701 main.go:141] libmachine: Using API Version  1
	I0205 02:56:37.493223   49701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0205 02:56:37.493536   49701 main.go:141] libmachine: () Calling .GetMachineName
	I0205 02:56:37.493732   49701 main.go:141] libmachine: (multinode-794103-m02) Calling .GetState
	I0205 02:56:37.495069   49701 status.go:371] multinode-794103-m02 host status = "Stopped" (err=<nil>)
	I0205 02:56:37.495080   49701 status.go:384] host is not running, skipping remaining checks
	I0205 02:56:37.495085   49701 status.go:176] multinode-794103-m02 status: &{Name:multinode-794103-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.01s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (115.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-794103 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-794103 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.856407621s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-794103 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (115.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-794103
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-794103-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-794103-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.361627ms)

                                                
                                                
-- stdout --
	* [multinode-794103-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-794103-m02' is duplicated with machine name 'multinode-794103-m02' in profile 'multinode-794103'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-794103-m03 --driver=kvm2  --container-runtime=crio
E0205 02:59:09.276425   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-794103-m03 --driver=kvm2  --container-runtime=crio: (43.953899841s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-794103
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-794103: exit status 80 (214.349088ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-794103 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-794103-m03 already exists in multinode-794103-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-794103-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.06s)

                                                
                                    
TestScheduledStopUnix (113.26s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-959397 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-959397 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.668964972s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959397 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-959397 -n scheduled-stop-959397
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959397 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0205 03:04:51.527580   19989 retry.go:31] will retry after 129.631µs: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.528745   19989 retry.go:31] will retry after 169.513µs: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.529895   19989 retry.go:31] will retry after 158.736µs: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.531040   19989 retry.go:31] will retry after 427.657µs: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.532157   19989 retry.go:31] will retry after 652.668µs: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.533298   19989 retry.go:31] will retry after 708.438µs: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.534423   19989 retry.go:31] will retry after 1.137147ms: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.536644   19989 retry.go:31] will retry after 993.941µs: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.537779   19989 retry.go:31] will retry after 2.785983ms: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.541008   19989 retry.go:31] will retry after 3.763766ms: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.545221   19989 retry.go:31] will retry after 3.75784ms: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.549402   19989 retry.go:31] will retry after 11.714668ms: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.561639   19989 retry.go:31] will retry after 7.727949ms: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.569899   19989 retry.go:31] will retry after 27.161729ms: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
I0205 03:04:51.598152   19989 retry.go:31] will retry after 43.067541ms: open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/scheduled-stop-959397/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959397 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-959397 -n scheduled-stop-959397
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-959397
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959397 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-959397
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-959397: exit status 7 (64.310095ms)

                                                
                                                
-- stdout --
	scheduled-stop-959397
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-959397 -n scheduled-stop-959397
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-959397 -n scheduled-stop-959397: exit status 7 (63.102684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-959397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-959397
--- PASS: TestScheduledStopUnix (113.26s)
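Note: the scheduled-stop sequence above arms a delayed stop, cancels it, then arms a short 15s schedule and waits for the profile to report Stopped (the exit status 7 from status is the expected "host stopped" outcome here). The manual equivalent, using the commands from this run:

    out/minikube-linux-amd64 stop -p scheduled-stop-959397 --schedule 5m        # arm a stop 5 minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-959397 --cancel-scheduled   # disarm it
    out/minikube-linux-amd64 stop -p scheduled-stop-959397 --schedule 15s       # arm a short schedule and let it fire
    out/minikube-linux-amd64 status -p scheduled-stop-959397                    # reports Stopped once the schedule fires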

                                                
                                    
TestRunningBinaryUpgrade (186.56s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3576737755 start -p running-upgrade-292727 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0205 03:06:04.765206   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3576737755 start -p running-upgrade-292727 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m33.863399386s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-292727 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-292727 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.212032896s)
helpers_test.go:175: Cleaning up "running-upgrade-292727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-292727
E0205 03:09:09.274007   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-292727: (1.00298086s)
--- PASS: TestRunningBinaryUpgrade (186.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-290619 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-290619 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (77.867748ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-290619] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
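Note: this subtest only exercises argument validation: --no-kubernetes and --kubernetes-version are mutually exclusive, so the command exits with status 14 (MK_USAGE) before any VM is created. As the error text above suggests, a globally configured version can be cleared with the binary under test:

    out/minikube-linux-amd64 config unset kubernetes-version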

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (65.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-290619 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-290619 --driver=kvm2  --container-runtime=crio: (1m5.673569083s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-290619 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (65.93s)

                                                
                                    
TestNetworkPlugins/group/false (3.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-253147 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-253147 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (106.229171ms)

                                                
                                                
-- stdout --
	* [false-253147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 03:06:06.227977   54515 out.go:345] Setting OutFile to fd 1 ...
	I0205 03:06:06.228111   54515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:06:06.228121   54515 out.go:358] Setting ErrFile to fd 2...
	I0205 03:06:06.228125   54515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 03:06:06.228300   54515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12788/.minikube/bin
	I0205 03:06:06.228903   54515 out.go:352] Setting JSON to false
	I0205 03:06:06.229836   54515 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6517,"bootTime":1738718249,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 03:06:06.229908   54515 start.go:139] virtualization: kvm guest
	I0205 03:06:06.232008   54515 out.go:177] * [false-253147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 03:06:06.233530   54515 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 03:06:06.233538   54515 notify.go:220] Checking for updates...
	I0205 03:06:06.235032   54515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 03:06:06.236463   54515 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12788/kubeconfig
	I0205 03:06:06.237886   54515 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12788/.minikube
	I0205 03:06:06.239221   54515 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 03:06:06.240461   54515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 03:06:06.242040   54515 config.go:182] Loaded profile config "NoKubernetes-290619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:06:06.242178   54515 config.go:182] Loaded profile config "offline-crio-269713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 03:06:06.242291   54515 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 03:06:06.280263   54515 out.go:177] * Using the kvm2 driver based on user configuration
	I0205 03:06:06.281686   54515 start.go:297] selected driver: kvm2
	I0205 03:06:06.281708   54515 start.go:901] validating driver "kvm2" against <nil>
	I0205 03:06:06.281727   54515 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 03:06:06.284318   54515 out.go:201] 
	W0205 03:06:06.285624   54515 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0205 03:06:06.286987   54515 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-253147 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-253147" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-253147" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-253147

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-253147"

                                                
                                                
----------------------- debugLogs end: false-253147 [took: 3.091062178s] --------------------------------
helpers_test.go:175: Cleaning up "false-253147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-253147
--- PASS: TestNetworkPlugins/group/false (3.36s)
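Note on the result above: this group passes because minikube is expected to refuse the flag combination. Exit status 14 is the MK_USAGE validation rejecting --cni=false together with the crio runtime, which needs a CNI to be selected. A minimal sketch of both paths, assuming any supported --cni value satisfies the check (bridge is shown purely as an illustration, not what the test uses):

    # Reproduces the validation error seen above: crio with CNI disabled exits 14 (MK_USAGE).
    out/minikube-linux-amd64 start -p false-253147 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio

    # A variant that passes validation by selecting an explicit CNI instead of disabling it.
    out/minikube-linux-amd64 start -p false-253147 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio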

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (37.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-290619 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-290619 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.377336328s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-290619 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-290619 status -o json: exit status 2 (258.482628ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-290619","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-290619
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-290619: (1.081719527s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (37.72s)
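The exit status 2 above is expected: for a --no-kubernetes profile, minikube status reports a running host with the kubelet and apiserver stopped, and signals that state with a non-zero exit while the test parses the JSON. A quick way to inspect the same fields by hand (assumes jq is installed; not part of the test itself):

    # Host should be Running while Kubelet/APIServer are Stopped for a --no-kubernetes profile.
    out/minikube-linux-amd64 -p NoKubernetes-290619 status -o json | jq '.Host, .Kubelet, .APIServer'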

                                                
                                    
TestNoKubernetes/serial/Start (47.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-290619 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-290619 --no-kubernetes --driver=kvm2  --container-runtime=crio: (47.765753496s)
--- PASS: TestNoKubernetes/serial/Start (47.77s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-290619 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-290619 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.940752ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
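The ssh exit status 1 above is the expected outcome: systemctl is-active returns non-zero when the kubelet unit is not active, and the test treats that as proof that Kubernetes is not running. Roughly the same check, runnable by hand:

    # A non-zero exit here means the kubelet unit is inactive, the expected state for --no-kubernetes.
    out/minikube-linux-amd64 ssh -p NoKubernetes-290619 "sudo systemctl is-active kubelet"; echo "exit=$?"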

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (17.191664658s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0205 03:08:52.343116   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.786800965s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.98s)
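Both invocations above exercise the table and JSON renderers of profile list. For scripting, the JSON form is easier to consume; a sketch, assuming the output keeps its usual valid/invalid arrays and that jq is available:

    # Print the names of the profiles minikube considers valid.
    out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'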

                                                
                                    
TestNoKubernetes/serial/Stop (1.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-290619
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-290619: (1.433715996s)
--- PASS: TestNoKubernetes/serial/Stop (1.43s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (36.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-290619 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-290619 --driver=kvm2  --container-runtime=crio: (36.181220183s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (36.18s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.41s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (116.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1894006800 start -p stopped-upgrade-687224 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1894006800 start -p stopped-upgrade-687224 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m14.412760287s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1894006800 -p stopped-upgrade-687224 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1894006800 -p stopped-upgrade-687224 stop: (2.152075668s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-687224 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0205 03:11:04.764352   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-687224 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.774863644s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (116.34s)
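The sequence above is the whole upgrade scenario: provision with an old release binary, stop the cluster, then start the same profile with the binary under test. In plain shell, with OLD_MINIKUBE standing in for the downloaded v1.26.0 release used above:

    OLD_MINIKUBE=/tmp/minikube-v1.26.0.1894006800
    $OLD_MINIKUBE start -p stopped-upgrade-687224 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    $OLD_MINIKUBE -p stopped-upgrade-687224 stop
    # The current binary must be able to adopt and restart the stopped profile.
    out/minikube-linux-amd64 start -p stopped-upgrade-687224 --memory=2200 --driver=kvm2 --container-runtime=crio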

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-290619 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-290619 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.112045ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestPause/serial/Start (104.11s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-922984 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-922984 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m44.108293097s)
--- PASS: TestPause/serial/Start (104.11s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-687224
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (99.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-788344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-788344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m39.616579579s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (99.62s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (92.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-993931 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0205 03:14:09.273618   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-993931 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m32.010985696s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-788344 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d05b2851-9071-4dd4-ae52-13e8ba855665] Pending
helpers_test.go:344: "busybox" [d05b2851-9071-4dd4-ae52-13e8ba855665] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d05b2851-9071-4dd4-ae52-13e8ba855665] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005317856s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-788344 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)
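The DeployApp step is a plain kubectl workflow: create the busybox pod from the repo's testdata, wait for it to report Ready, then exec a trivial command to prove the container runtime works. Roughly equivalent by hand (testdata/busybox.yaml is the manifest shipped with the minikube integration tests):

    kubectl --context no-preload-788344 create -f testdata/busybox.yaml
    # The test polls for up to 8 minutes; kubectl wait does the same thing declaratively.
    kubectl --context no-preload-788344 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-788344 exec busybox -- /bin/sh -c "ulimit -n"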

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-993931 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f0631ec6-043e-4edb-91a6-8e7df6d3a9b3] Pending
helpers_test.go:344: "busybox" [f0631ec6-043e-4edb-91a6-8e7df6d3a9b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f0631ec6-043e-4edb-91a6-8e7df6d3a9b3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004354481s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-993931 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-788344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-788344 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)
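The addon flags above take Name=value pairs: --images pins a specific image tag for a named addon component, and --registries points that same component name at an alternate registry host (here the deliberately unreachable fake.domain), so the follow-up describe can confirm the override landed in the deployment. A hand-run sketch of the same check:

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-788344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # The deployment spec should now reference fake.domain/registry.k8s.io/echoserver:1.4.
    kubectl --context no-preload-788344 describe deploy/metrics-server -n kube-system | grep -i image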

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-788344 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-788344 --alsologtostderr -v=3: (1m31.018043531s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-993931 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-993931 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-993931 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-993931 --alsologtostderr -v=3: (1m31.011654764s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-788344 -n no-preload-788344
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-788344 -n no-preload-788344: exit status 7 (69.588028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-788344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (348.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-788344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-788344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m48.306165024s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-788344 -n no-preload-788344
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (348.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-993931 -n embed-certs-993931
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-993931 -n embed-certs-993931: exit status 7 (64.404422ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-993931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (313.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-993931 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0205 03:16:04.764639   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-993931 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m13.484677248s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-993931 -n embed-certs-993931
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (313.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (5.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-191773 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-191773 --alsologtostderr -v=3: (5.299585233s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191773 -n old-k8s-version-191773: exit status 7 (66.283993ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-191773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-bbtnv" [12c0addc-638e-4e49-b2f8-4552452c6037] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004458866s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-bbtnv" [12c0addc-638e-4e49-b2f8-4552452c6037] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003532487s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-993931 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-993931 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
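The image check pulls the runtime's image inventory as JSON and flags anything outside the expected Kubernetes image set; the entries reported above are simply extra images already present in the profile. A hand-run equivalent, assuming each JSON entry carries a repoTags array and that jq is available:

    out/minikube-linux-amd64 -p embed-certs-993931 image list --format=json | jq -r '.[].repoTags[]'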

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-993931 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-993931 -n embed-certs-993931
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-993931 -n embed-certs-993931: exit status 2 (259.555075ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-993931 -n embed-certs-993931
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-993931 -n embed-certs-993931: exit status 2 (247.3518ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-993931 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-993931 -n embed-certs-993931
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-993931 -n embed-certs-993931
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.78s)
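The exit status 2 results above are expected: while the profile is paused, minikube status reports the apiserver as Paused and the kubelet as Stopped and signals that with a non-zero exit, so the intermediate checks tolerate the failure. The same round trip in shell form:

    out/minikube-linux-amd64 pause -p embed-certs-993931
    # status exits non-zero while paused, hence the || true.
    out/minikube-linux-amd64 status -p embed-certs-993931 --format='{{.APIServer}}' || true
    out/minikube-linux-amd64 unpause -p embed-certs-993931
    out/minikube-linux-amd64 status -p embed-certs-993931 --format='{{.APIServer}}'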

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-568677 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-568677 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m24.536839077s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-nkdkp" [a1bb36b8-45f4-4de6-ba56-0d8917ddfb55] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-nkdkp" [a1bb36b8-45f4-4de6-ba56-0d8917ddfb55] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.005075926s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-nkdkp" [a1bb36b8-45f4-4de6-ba56-0d8917ddfb55] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006825826s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-788344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-788344 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-788344 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-788344 -n no-preload-788344
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-788344 -n no-preload-788344: exit status 2 (253.475807ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-788344 -n no-preload-788344
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-788344 -n no-preload-788344: exit status 2 (261.308653ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-788344 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-788344 -n no-preload-788344
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-788344 -n no-preload-788344
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (45.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-437156 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-437156 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (45.277662107s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.28s)
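Two flags in the command above are worth calling out: --extra-config takes component.key=value pairs that are handed to the named component (here kubeadm's pod-network-cidr), and --wait narrows which components minikube blocks on, since with only a network plugin configured no workload pods will become Ready. The start command, reduced to those pieces:

    # Wait only for the apiserver, system pods and default service account; hand 10.42.0.0/16 to kubeadm.
    out/minikube-linux-amd64 start -p newest-cni-437156 --memory=2200 --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --wait=apiserver,system_pods,default_sa --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1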

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-437156 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-437156 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.160387199s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-437156 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-437156 --alsologtostderr -v=3: (10.675824699s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.68s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-568677 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fad83d76-75a8-4c65-a89c-d148bea10576] Pending
helpers_test.go:344: "busybox" [fad83d76-75a8-4c65-a89c-d148bea10576] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fad83d76-75a8-4c65-a89c-d148bea10576] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004841273s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-568677 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-437156 -n newest-cni-437156
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-437156 -n newest-cni-437156: exit status 7 (65.051177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-437156 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-437156 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-437156 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (36.13930395s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-437156 -n newest-cni-437156
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-568677 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-568677 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-568677 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-568677 --alsologtostderr -v=3: (1m31.029873268s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-437156 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-437156 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-437156 -n newest-cni-437156
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-437156 -n newest-cni-437156: exit status 2 (236.822092ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-437156 -n newest-cni-437156
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-437156 -n newest-cni-437156: exit status 2 (236.235146ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-437156 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-437156 -n newest-cni-437156
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-437156 -n newest-cni-437156
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.31s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (61.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0205 03:24:09.273543   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:10.655106   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:10.661499   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:10.672845   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:10.694273   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:10.735787   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:10.817275   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:10.978834   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:11.300366   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:11.942010   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:13.223680   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:15.785034   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:20.907376   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:24:31.148934   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m1.932379493s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-568677 -n default-k8s-diff-port-568677
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-568677 -n default-k8s-diff-port-568677: exit status 7 (63.19583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-568677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (323.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-568677 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-568677 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m23.547122418s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-568677 -n default-k8s-diff-port-568677
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (323.80s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-253147 "pgrep -a kubelet"
I0205 03:24:40.749861   19989 config.go:182] Loaded profile config "auto-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-253147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-nkrdl" [2d612ef9-5261-47cd-8768-836a69249011] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-nkrdl" [2d612ef9-5261-47cd-8768-836a69249011] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004325691s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-253147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (59.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0205 03:25:32.344469   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/functional-910650/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:25:32.592115   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/no-preload-788344/client.crt: no such file or directory" logger="UnhandledError"
E0205 03:26:04.764712   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/addons-395572/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (59.915896739s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.92s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-jlqlj" [3c9d6c91-ceff-47a7-90ff-5a97e567dea5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003226991s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-253147 "pgrep -a kubelet"
I0205 03:26:13.570262   19989 config.go:182] Loaded profile config "kindnet-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-253147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bmt8c" [c64d9d88-98de-4acd-8b9c-4c9758a7a334] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bmt8c" [c64d9d88-98de-4acd-8b9c-4c9758a7a334] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004027012s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-253147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (80.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m20.484657552s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.48s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dxl94" [f9256a89-304f-43e8-b428-07636ced62a9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004217932s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-253147 "pgrep -a kubelet"
I0205 03:28:05.110763   19989 config.go:182] Loaded profile config "calico-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-253147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dw896" [6b685cc6-50c7-4838-861c-3325255f4f39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dw896" [6b685cc6-50c7-4838-861c-3325255f4f39] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003308299s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-253147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.93803203s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.94s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (74.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m14.629842272s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.63s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-253147 "pgrep -a kubelet"
I0205 03:29:43.226134   19989 config.go:182] Loaded profile config "custom-flannel-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-253147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-n5gzp" [b895baa0-5e3e-4f21-8bc1-e81766f6d8d1] Pending
E0205 03:29:43.516254   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-n5gzp" [b895baa0-5e3e-4f21-8bc1-e81766f6d8d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004161413s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-253147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ktqv8" [58ace88c-5420-4034-8f2e-26345882033a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005239357s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ktqv8" [58ace88c-5420-4034-8f2e-26345882033a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003691322s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-568677 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4dwrz" [1cc46d03-e704-4c91-be3f-934967a6b460] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003897719s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-568677 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-568677 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-568677 -n default-k8s-diff-port-568677
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-568677 -n default-k8s-diff-port-568677: exit status 2 (257.696978ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-568677 -n default-k8s-diff-port-568677
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-568677 -n default-k8s-diff-port-568677: exit status 2 (266.423711ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-568677 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-568677 -n default-k8s-diff-port-568677
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-568677 -n default-k8s-diff-port-568677
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (53.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (53.643207074s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.64s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-253147 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.62s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-253147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-fvqk5" [d2a2587b-3a6a-4c57-a83d-2c672f0c9547] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-fvqk5" [d2a2587b-3a6a-4c57-a83d-2c672f0c9547] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004322396s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (112.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-253147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m52.220036425s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (112.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-253147 exec deployment/netcat -- nslookup kubernetes.default
E0205 03:30:21.924250   19989 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12788/.minikube/profiles/auto-253147/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-253147 "pgrep -a kubelet"
I0205 03:31:05.531566   19989 config.go:182] Loaded profile config "bridge-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-253147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2cgkh" [bf03ad6f-c14f-407a-af72-8a1725bf34b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2cgkh" [bf03ad6f-c14f-407a-af72-8a1725bf34b0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003461413s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (21.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-253147 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-253147 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142241581s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0205 03:31:31.918429   19989 retry.go:31] will retry after 781.408463ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-253147 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-253147 exec deployment/netcat -- nslookup kubernetes.default: (5.128089251s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.05s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-253147 "pgrep -a kubelet"
I0205 03:32:05.447046   19989 config.go:182] Loaded profile config "enable-default-cni-253147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-253147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dgjpg" [eb71cb90-d70e-461e-972e-bed0e6f94141] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dgjpg" [eb71cb90-d70e-461e-972e-bed0e6f94141] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003122671s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-253147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-253147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    

Test skip (40/321)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
260 TestStartStop/group/disable-driver-mounts 0.16
264 TestNetworkPlugins/group/kubenet 3.41
273 TestNetworkPlugins/group/cilium 3.4
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-395572 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-365306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-365306
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-253147 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-253147" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-253147" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-253147

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-253147"

                                                
                                                
----------------------- debugLogs end: kubenet-253147 [took: 3.263992106s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-253147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-253147
--- SKIP: TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-253147 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-253147" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-253147

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-253147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-253147"

                                                
                                                
----------------------- debugLogs end: cilium-253147 [took: 3.259853632s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-253147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-253147
--- SKIP: TestNetworkPlugins/group/cilium (3.40s)

                                                
                                    